00:00:00.000 Started by upstream project "autotest-per-patch" build number 126234 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.019 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.020 using credential 00000000-0000-0000-0000-000000000002 00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.034 Fetching changes from the remote Git repository 00:00:00.037 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.073 Using shallow fetch with depth 1 00:00:00.073 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.074 > git --version # timeout=10 00:00:00.097 > git --version # 'git version 2.39.2' 00:00:00.097 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.128 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.128 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.744 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.753 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.764 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:02.764 > git config core.sparsecheckout # timeout=10 00:00:02.774 > git read-tree -mu HEAD # timeout=10 00:00:02.789 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:02.812 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:02.812 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:02.915 [Pipeline] Start of Pipeline 00:00:02.930 [Pipeline] library 00:00:02.932 Loading library shm_lib@master 00:00:02.932 Library shm_lib@master is cached. Copying from home. 00:00:02.951 [Pipeline] node 00:00:02.966 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:02.968 [Pipeline] { 00:00:02.981 [Pipeline] catchError 00:00:02.982 [Pipeline] { 00:00:02.999 [Pipeline] wrap 00:00:03.011 [Pipeline] { 00:00:03.021 [Pipeline] stage 00:00:03.023 [Pipeline] { (Prologue) 00:00:03.202 [Pipeline] sh 00:00:03.482 + logger -p user.info -t JENKINS-CI 00:00:03.504 [Pipeline] echo 00:00:03.506 Node: WFP20 00:00:03.513 [Pipeline] sh 00:00:03.808 [Pipeline] setCustomBuildProperty 00:00:03.822 [Pipeline] echo 00:00:03.823 Cleanup processes 00:00:03.827 [Pipeline] sh 00:00:04.106 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.106 649672 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.118 [Pipeline] sh 00:00:04.397 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.397 ++ grep -v 'sudo pgrep' 00:00:04.397 ++ awk '{print $1}' 00:00:04.397 + sudo kill -9 00:00:04.397 + true 00:00:04.408 [Pipeline] cleanWs 00:00:04.416 [WS-CLEANUP] Deleting project workspace... 00:00:04.416 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.421 [WS-CLEANUP] done 00:00:04.426 [Pipeline] setCustomBuildProperty 00:00:04.442 [Pipeline] sh 00:00:04.720 + sudo git config --global --replace-all safe.directory '*' 00:00:04.805 [Pipeline] httpRequest 00:00:04.823 [Pipeline] echo 00:00:04.824 Sorcerer 10.211.164.101 is alive 00:00:04.833 [Pipeline] httpRequest 00:00:04.838 HttpMethod: GET 00:00:04.838 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.838 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.841 Response Code: HTTP/1.1 200 OK 00:00:04.842 Success: Status code 200 is in the accepted range: 200,404 00:00:04.842 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.768 [Pipeline] sh 00:00:06.044 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.056 [Pipeline] httpRequest 00:00:06.081 [Pipeline] echo 00:00:06.082 Sorcerer 10.211.164.101 is alive 00:00:06.088 [Pipeline] httpRequest 00:00:06.091 HttpMethod: GET 00:00:06.091 URL: http://10.211.164.101/packages/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:00:06.092 Sending request to url: http://10.211.164.101/packages/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:00:06.111 Response Code: HTTP/1.1 200 OK 00:00:06.118 Success: Status code 200 is in the accepted range: 200,404 00:00:06.120 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:01:09.594 [Pipeline] sh 00:01:09.876 + tar --no-same-owner -xf spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:01:12.482 [Pipeline] sh 00:01:12.761 + git -C spdk log --oneline -n5 00:01:12.761 cdc37ee83 env_dpdk: deprecate spdk_env_opts_init and spdk_env_init 00:01:12.762 24018edd4 all: replace spdk_env_opts_init/spdk_env_init with _ext variant 00:01:12.762 3269bc4bc env: add spdk_env_opts_init_ext() 00:01:12.762 d9917142f env: pack and assert size for spdk_env_opts 00:01:12.762 1bd83e221 sock: add spdk_sock_get_numa_socket_id 00:01:12.780 [Pipeline] } 00:01:12.803 [Pipeline] // stage 00:01:12.812 [Pipeline] stage 00:01:12.814 [Pipeline] { (Prepare) 00:01:12.832 [Pipeline] writeFile 00:01:12.970 [Pipeline] sh 00:01:13.252 + logger -p user.info -t JENKINS-CI 00:01:13.264 [Pipeline] sh 00:01:13.545 + logger -p user.info -t JENKINS-CI 00:01:13.556 [Pipeline] sh 00:01:13.837 + cat autorun-spdk.conf 00:01:13.837 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.837 SPDK_TEST_FUZZER_SHORT=1 00:01:13.837 SPDK_TEST_FUZZER=1 00:01:13.837 SPDK_RUN_UBSAN=1 00:01:13.844 RUN_NIGHTLY=0 00:01:13.851 [Pipeline] readFile 00:01:13.891 [Pipeline] withEnv 00:01:13.894 [Pipeline] { 00:01:13.909 [Pipeline] sh 00:01:14.186 + set -ex 00:01:14.186 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:14.186 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:14.186 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.186 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:14.186 ++ SPDK_TEST_FUZZER=1 00:01:14.186 ++ SPDK_RUN_UBSAN=1 00:01:14.186 ++ RUN_NIGHTLY=0 00:01:14.186 + case $SPDK_TEST_NVMF_NICS in 00:01:14.186 + DRIVERS= 00:01:14.186 + [[ -n '' ]] 00:01:14.186 + exit 0 00:01:14.195 [Pipeline] } 00:01:14.214 [Pipeline] // withEnv 00:01:14.220 [Pipeline] } 00:01:14.237 [Pipeline] // stage 00:01:14.247 [Pipeline] catchError 00:01:14.249 [Pipeline] { 00:01:14.265 [Pipeline] timeout 00:01:14.265 Timeout set to expire in 30 
min 00:01:14.267 [Pipeline] { 00:01:14.283 [Pipeline] stage 00:01:14.285 [Pipeline] { (Tests) 00:01:14.301 [Pipeline] sh 00:01:14.583 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:14.583 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:14.583 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:14.583 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:14.583 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:14.583 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:14.583 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:14.583 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:14.583 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:14.583 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:14.583 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:14.583 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:14.583 + source /etc/os-release 00:01:14.583 ++ NAME='Fedora Linux' 00:01:14.583 ++ VERSION='38 (Cloud Edition)' 00:01:14.583 ++ ID=fedora 00:01:14.583 ++ VERSION_ID=38 00:01:14.583 ++ VERSION_CODENAME= 00:01:14.583 ++ PLATFORM_ID=platform:f38 00:01:14.583 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:14.583 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:14.583 ++ LOGO=fedora-logo-icon 00:01:14.583 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:14.583 ++ HOME_URL=https://fedoraproject.org/ 00:01:14.583 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:14.583 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:14.583 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:14.583 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:14.583 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:14.583 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:14.583 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:14.583 ++ SUPPORT_END=2024-05-14 00:01:14.583 ++ VARIANT='Cloud Edition' 00:01:14.583 ++ VARIANT_ID=cloud 00:01:14.583 + uname -a 00:01:14.583 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:14.583 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:17.869 Hugepages 00:01:17.869 node hugesize free / total 00:01:17.869 node0 1048576kB 0 / 0 00:01:17.869 node0 2048kB 0 / 0 00:01:17.869 node1 1048576kB 0 / 0 00:01:17.869 node1 2048kB 0 / 0 00:01:17.869 00:01:17.869 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:17.869 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:17.869 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:17.869 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:17.869 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:17.869 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:17.869 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:17.869 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:17.869 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:17.869 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:17.869 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:17.869 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:17.869 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:17.869 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:17.869 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:17.869 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:17.869 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:17.869 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 
nvme0n1 00:01:17.869 + rm -f /tmp/spdk-ld-path 00:01:17.869 + source autorun-spdk.conf 00:01:17.869 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.869 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:17.869 ++ SPDK_TEST_FUZZER=1 00:01:17.869 ++ SPDK_RUN_UBSAN=1 00:01:17.869 ++ RUN_NIGHTLY=0 00:01:17.869 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:17.869 + [[ -n '' ]] 00:01:17.869 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:17.869 + for M in /var/spdk/build-*-manifest.txt 00:01:17.869 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:17.869 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:17.869 + for M in /var/spdk/build-*-manifest.txt 00:01:17.869 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:17.869 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:17.869 ++ uname 00:01:17.869 + [[ Linux == \L\i\n\u\x ]] 00:01:17.869 + sudo dmesg -T 00:01:17.869 + sudo dmesg --clear 00:01:17.869 + dmesg_pid=651132 00:01:17.869 + [[ Fedora Linux == FreeBSD ]] 00:01:17.869 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.869 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.869 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:17.869 + [[ -x /usr/src/fio-static/fio ]] 00:01:17.869 + export FIO_BIN=/usr/src/fio-static/fio 00:01:17.869 + FIO_BIN=/usr/src/fio-static/fio 00:01:17.869 + sudo dmesg -Tw 00:01:17.869 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:17.869 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:17.869 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:17.869 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.869 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.869 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:17.869 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.869 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.869 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:17.869 Test configuration: 00:01:17.869 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.869 SPDK_TEST_FUZZER_SHORT=1 00:01:17.869 SPDK_TEST_FUZZER=1 00:01:17.869 SPDK_RUN_UBSAN=1 00:01:17.869 RUN_NIGHTLY=0 20:51:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:17.869 20:51:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:17.869 20:51:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:17.869 20:51:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:17.869 20:51:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.869 20:51:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.869 20:51:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.869 20:51:45 -- paths/export.sh@5 -- $ export PATH 00:01:17.870 20:51:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.870 20:51:45 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:17.870 20:51:45 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:17.870 20:51:45 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069505.XXXXXX 00:01:17.870 20:51:45 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721069505.IahDIN 00:01:17.870 20:51:45 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:17.870 20:51:45 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:17.870 20:51:45 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:17.870 20:51:45 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:17.870 20:51:45 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:17.870 20:51:45 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:17.870 20:51:45 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:17.870 20:51:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.870 20:51:45 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:17.870 20:51:45 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:17.870 20:51:45 -- pm/common@17 -- $ local monitor 00:01:17.870 20:51:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:17.870 20:51:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:17.870 20:51:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:17.870 20:51:45 -- pm/common@21 -- $ date +%s 00:01:17.870 20:51:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:17.870 20:51:45 -- pm/common@21 -- $ date +%s 
00:01:17.870 20:51:45 -- pm/common@25 -- $ sleep 1 00:01:17.870 20:51:45 -- pm/common@21 -- $ date +%s 00:01:17.870 20:51:45 -- pm/common@21 -- $ date +%s 00:01:17.870 20:51:45 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721069505 00:01:17.870 20:51:45 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721069505 00:01:17.870 20:51:45 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721069505 00:01:17.870 20:51:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721069505 00:01:18.129 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721069505_collect-vmstat.pm.log 00:01:18.129 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721069505_collect-cpu-temp.pm.log 00:01:18.129 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721069505_collect-cpu-load.pm.log 00:01:18.129 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721069505_collect-bmc-pm.bmc.pm.log 00:01:19.064 20:51:46 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:19.064 20:51:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.064 20:51:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.064 20:51:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:19.064 20:51:46 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.064 Mon Jul 15 06:51:46 PM UTC 2024 00:01:19.064 20:51:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.064 v24.09-pre-226-gcdc37ee83 00:01:19.064 20:51:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:19.064 20:51:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:19.064 20:51:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:19.064 20:51:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:19.064 20:51:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:19.064 20:51:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.064 ************************************ 00:01:19.064 START TEST ubsan 00:01:19.064 ************************************ 00:01:19.064 20:51:46 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:19.064 using ubsan 00:01:19.064 00:01:19.064 real 0m0.000s 00:01:19.064 user 0m0.000s 00:01:19.064 sys 0m0.000s 00:01:19.064 20:51:46 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:19.064 20:51:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:19.064 ************************************ 00:01:19.064 END TEST ubsan 00:01:19.064 ************************************ 00:01:19.064 20:51:46 -- common/autotest_common.sh@1142 -- $ return 0 00:01:19.064 20:51:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:19.064 20:51:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:19.064 20:51:46 
-- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:19.064 20:51:46 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:19.064 20:51:46 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:19.064 20:51:46 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:19.064 20:51:46 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:19.064 20:51:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:19.064 20:51:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.064 ************************************ 00:01:19.064 START TEST autobuild_llvm_precompile 00:01:19.064 ************************************ 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:01:19.064 Target: x86_64-redhat-linux-gnu 00:01:19.064 Thread model: posix 00:01:19.064 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:01:19.064 20:51:46 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:19.322 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:19.322 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:19.888 Using 'verbs' RDMA provider 00:01:35.703 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:47.916 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:47.916 Creating mk/config.mk...done. 00:01:47.916 Creating mk/cc.flags.mk...done. 
00:01:47.916 Type 'make' to build. 00:01:47.916 00:01:47.916 real 0m28.725s 00:01:47.916 user 0m12.466s 00:01:47.916 sys 0m15.519s 00:01:47.916 20:52:14 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:47.916 20:52:14 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:47.916 ************************************ 00:01:47.916 END TEST autobuild_llvm_precompile 00:01:47.916 ************************************ 00:01:47.916 20:52:15 -- common/autotest_common.sh@1142 -- $ return 0 00:01:47.916 20:52:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:47.916 20:52:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:47.916 20:52:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:47.916 20:52:15 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:47.916 20:52:15 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:48.175 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:48.175 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:48.433 Using 'verbs' RDMA provider 00:02:01.580 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:13.789 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:13.789 Creating mk/config.mk...done. 00:02:13.789 Creating mk/cc.flags.mk...done. 00:02:13.789 Type 'make' to build. 00:02:13.789 20:52:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:13.789 20:52:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:13.789 20:52:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:13.789 20:52:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.789 ************************************ 00:02:13.789 START TEST make 00:02:13.789 ************************************ 00:02:13.789 20:52:39 make -- common/autotest_common.sh@1123 -- $ make -j112 00:02:13.789 make[1]: Nothing to be done for 'all'. 
00:02:14.759 The Meson build system 00:02:14.759 Version: 1.3.1 00:02:14.759 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:14.759 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:14.759 Build type: native build 00:02:14.759 Project name: libvfio-user 00:02:14.759 Project version: 0.0.1 00:02:14.759 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:14.759 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:14.759 Host machine cpu family: x86_64 00:02:14.759 Host machine cpu: x86_64 00:02:14.759 Run-time dependency threads found: YES 00:02:14.759 Library dl found: YES 00:02:14.759 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:14.759 Run-time dependency json-c found: YES 0.17 00:02:14.759 Run-time dependency cmocka found: YES 1.1.7 00:02:14.759 Program pytest-3 found: NO 00:02:14.759 Program flake8 found: NO 00:02:14.759 Program misspell-fixer found: NO 00:02:14.759 Program restructuredtext-lint found: NO 00:02:14.759 Program valgrind found: YES (/usr/bin/valgrind) 00:02:14.759 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.759 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.759 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.759 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:14.759 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:14.759 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:14.759 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:14.759 Build targets in project: 8 00:02:14.759 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:14.759 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:14.759 00:02:14.759 libvfio-user 0.0.1 00:02:14.759 00:02:14.759 User defined options 00:02:14.759 buildtype : debug 00:02:14.759 default_library: static 00:02:14.759 libdir : /usr/local/lib 00:02:14.759 00:02:14.759 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.016 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:15.016 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:15.016 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:15.016 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:15.016 [4/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:15.016 [5/36] Compiling C object samples/null.p/null.c.o 00:02:15.016 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:15.016 [7/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:15.016 [8/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:15.016 [9/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:15.016 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:15.016 [11/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:15.016 [12/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:15.016 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:15.016 [14/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:15.016 [15/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:15.016 [16/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:15.016 [17/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:15.016 [18/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:15.016 [19/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:15.016 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:15.016 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:15.016 [22/36] Compiling C object samples/server.p/server.c.o 00:02:15.016 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:15.016 [24/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:15.016 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:15.016 [26/36] Compiling C object samples/client.p/client.c.o 00:02:15.016 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:15.016 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:15.016 [29/36] Linking static target lib/libvfio-user.a 00:02:15.016 [30/36] Linking target samples/client 00:02:15.274 [31/36] Linking target samples/server 00:02:15.274 [32/36] Linking target samples/gpio-pci-idio-16 00:02:15.274 [33/36] Linking target test/unit_tests 00:02:15.274 [34/36] Linking target samples/null 00:02:15.274 [35/36] Linking target samples/lspci 00:02:15.274 [36/36] Linking target samples/shadow_ioeventfd_server 00:02:15.274 INFO: autodetecting backend as ninja 00:02:15.274 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:15.274 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:15.532 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:15.533 ninja: no work to do. 00:02:20.801 The Meson build system 00:02:20.801 Version: 1.3.1 00:02:20.801 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:20.801 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:20.801 Build type: native build 00:02:20.801 Program cat found: YES (/usr/bin/cat) 00:02:20.801 Project name: DPDK 00:02:20.801 Project version: 24.03.0 00:02:20.801 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:20.801 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:20.801 Host machine cpu family: x86_64 00:02:20.801 Host machine cpu: x86_64 00:02:20.801 Message: ## Building in Developer Mode ## 00:02:20.801 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:20.801 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:20.801 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:20.801 Program python3 found: YES (/usr/bin/python3) 00:02:20.801 Program cat found: YES (/usr/bin/cat) 00:02:20.801 Compiler for C supports arguments -march=native: YES 00:02:20.801 Checking for size of "void *" : 8 00:02:20.801 Checking for size of "void *" : 8 (cached) 00:02:20.801 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:20.801 Library m found: YES 00:02:20.801 Library numa found: YES 00:02:20.801 Has header "numaif.h" : YES 00:02:20.801 Library fdt found: NO 00:02:20.801 Library execinfo found: NO 00:02:20.801 Has header "execinfo.h" : YES 00:02:20.801 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:20.801 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:20.801 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:20.801 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:20.801 Run-time dependency openssl found: YES 3.0.9 00:02:20.801 Run-time dependency libpcap found: YES 1.10.4 00:02:20.801 Has header "pcap.h" with dependency libpcap: YES 00:02:20.801 Compiler for C supports arguments -Wcast-qual: YES 00:02:20.801 Compiler for C supports arguments -Wdeprecated: YES 00:02:20.801 Compiler for C supports arguments -Wformat: YES 00:02:20.801 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:20.801 Compiler for C supports arguments -Wformat-security: YES 00:02:20.801 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:20.801 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:20.801 Compiler for C supports arguments -Wnested-externs: YES 00:02:20.801 Compiler for C supports arguments -Wold-style-definition: YES 00:02:20.801 Compiler for C supports arguments -Wpointer-arith: YES 00:02:20.801 Compiler for C supports arguments -Wsign-compare: YES 00:02:20.801 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:20.801 Compiler for C supports arguments -Wundef: YES 00:02:20.801 Compiler for C supports arguments -Wwrite-strings: YES 00:02:20.801 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:20.801 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:20.801 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:20.801 Program objdump found: YES (/usr/bin/objdump) 00:02:20.801 Compiler for C supports arguments -mavx512f: YES 00:02:20.801 Checking if "AVX512 checking" compiles: YES 00:02:20.801 Fetching value of define "__SSE4_2__" : 1 00:02:20.801 Fetching value of define "__AES__" : 1 00:02:20.801 Fetching value of define "__AVX__" : 1 00:02:20.801 Fetching value of define "__AVX2__" : 1 00:02:20.801 Fetching value of define "__AVX512BW__" : 1 00:02:20.801 Fetching value of define "__AVX512CD__" : 1 00:02:20.801 Fetching value of define "__AVX512DQ__" : 1 00:02:20.801 Fetching value of define "__AVX512F__" : 1 00:02:20.801 Fetching value of define "__AVX512VL__" : 1 00:02:20.801 Fetching value of define "__PCLMUL__" : 1 00:02:20.801 Fetching value of define "__RDRND__" : 1 00:02:20.801 Fetching value of define "__RDSEED__" : 1 00:02:20.801 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:20.801 Fetching value of define "__znver1__" : (undefined) 00:02:20.801 Fetching value of define "__znver2__" : (undefined) 00:02:20.801 Fetching value of define "__znver3__" : (undefined) 00:02:20.801 Fetching value of define "__znver4__" : (undefined) 00:02:20.801 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:20.801 Message: lib/log: Defining dependency "log" 00:02:20.801 Message: lib/kvargs: Defining dependency "kvargs" 00:02:20.801 Message: lib/telemetry: Defining dependency "telemetry" 00:02:20.801 Checking for function "getentropy" : NO 00:02:20.801 Message: lib/eal: Defining dependency "eal" 00:02:20.801 Message: lib/ring: Defining dependency "ring" 00:02:20.801 Message: lib/rcu: Defining dependency "rcu" 00:02:20.801 Message: lib/mempool: Defining dependency "mempool" 00:02:20.801 Message: lib/mbuf: Defining dependency "mbuf" 00:02:20.801 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:20.801 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:20.801 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:20.801 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:20.801 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:20.801 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:20.801 Compiler for C supports arguments -mpclmul: YES 00:02:20.801 Compiler for C supports arguments -maes: YES 00:02:20.801 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:20.801 Compiler for C supports arguments -mavx512bw: YES 00:02:20.802 Compiler for C supports arguments -mavx512dq: YES 00:02:20.802 Compiler for C supports arguments -mavx512vl: YES 00:02:20.802 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:20.802 Compiler for C supports arguments -mavx2: YES 00:02:20.802 Compiler for C supports arguments -mavx: YES 00:02:20.802 Message: lib/net: Defining dependency "net" 00:02:20.802 Message: lib/meter: Defining dependency "meter" 00:02:20.802 Message: lib/ethdev: Defining dependency "ethdev" 00:02:20.802 Message: lib/pci: Defining dependency "pci" 00:02:20.802 Message: lib/cmdline: Defining dependency "cmdline" 00:02:20.802 Message: lib/hash: Defining dependency "hash" 00:02:20.802 Message: lib/timer: Defining dependency "timer" 00:02:20.802 Message: lib/compressdev: Defining dependency "compressdev" 00:02:20.802 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:20.802 Message: lib/dmadev: Defining dependency "dmadev" 00:02:20.802 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:20.802 Message: lib/power: Defining dependency "power" 00:02:20.802 Message: lib/reorder: Defining 
dependency "reorder" 00:02:20.802 Message: lib/security: Defining dependency "security" 00:02:20.802 Has header "linux/userfaultfd.h" : YES 00:02:20.802 Has header "linux/vduse.h" : YES 00:02:20.802 Message: lib/vhost: Defining dependency "vhost" 00:02:20.802 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:20.802 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:20.802 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:20.802 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:20.802 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:20.802 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:20.802 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:20.802 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:20.802 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:20.802 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:20.802 Program doxygen found: YES (/usr/bin/doxygen) 00:02:20.802 Configuring doxy-api-html.conf using configuration 00:02:20.802 Configuring doxy-api-man.conf using configuration 00:02:20.802 Program mandb found: YES (/usr/bin/mandb) 00:02:20.802 Program sphinx-build found: NO 00:02:20.802 Configuring rte_build_config.h using configuration 00:02:20.802 Message: 00:02:20.802 ================= 00:02:20.802 Applications Enabled 00:02:20.802 ================= 00:02:20.802 00:02:20.802 apps: 00:02:20.802 00:02:20.802 00:02:20.802 Message: 00:02:20.802 ================= 00:02:20.802 Libraries Enabled 00:02:20.802 ================= 00:02:20.802 00:02:20.802 libs: 00:02:20.802 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:20.802 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:20.802 cryptodev, dmadev, power, reorder, security, vhost, 00:02:20.802 00:02:20.802 Message: 00:02:20.802 =============== 00:02:20.802 Drivers Enabled 00:02:20.802 =============== 00:02:20.802 00:02:20.802 common: 00:02:20.802 00:02:20.802 bus: 00:02:20.802 pci, vdev, 00:02:20.802 mempool: 00:02:20.802 ring, 00:02:20.802 dma: 00:02:20.802 00:02:20.802 net: 00:02:20.802 00:02:20.802 crypto: 00:02:20.802 00:02:20.802 compress: 00:02:20.802 00:02:20.802 vdpa: 00:02:20.802 00:02:20.802 00:02:20.802 Message: 00:02:20.802 ================= 00:02:20.802 Content Skipped 00:02:20.802 ================= 00:02:20.802 00:02:20.802 apps: 00:02:20.802 dumpcap: explicitly disabled via build config 00:02:20.802 graph: explicitly disabled via build config 00:02:20.802 pdump: explicitly disabled via build config 00:02:20.802 proc-info: explicitly disabled via build config 00:02:20.802 test-acl: explicitly disabled via build config 00:02:20.802 test-bbdev: explicitly disabled via build config 00:02:20.802 test-cmdline: explicitly disabled via build config 00:02:20.802 test-compress-perf: explicitly disabled via build config 00:02:20.802 test-crypto-perf: explicitly disabled via build config 00:02:20.802 test-dma-perf: explicitly disabled via build config 00:02:20.802 test-eventdev: explicitly disabled via build config 00:02:20.802 test-fib: explicitly disabled via build config 00:02:20.802 test-flow-perf: explicitly disabled via build config 00:02:20.802 test-gpudev: explicitly disabled via build config 00:02:20.802 test-mldev: explicitly disabled via build config 00:02:20.802 test-pipeline: explicitly disabled via build config 00:02:20.802 test-pmd: explicitly 
disabled via build config 00:02:20.802 test-regex: explicitly disabled via build config 00:02:20.802 test-sad: explicitly disabled via build config 00:02:20.802 test-security-perf: explicitly disabled via build config 00:02:20.802 00:02:20.802 libs: 00:02:20.802 argparse: explicitly disabled via build config 00:02:20.802 metrics: explicitly disabled via build config 00:02:20.802 acl: explicitly disabled via build config 00:02:20.802 bbdev: explicitly disabled via build config 00:02:20.802 bitratestats: explicitly disabled via build config 00:02:20.802 bpf: explicitly disabled via build config 00:02:20.802 cfgfile: explicitly disabled via build config 00:02:20.802 distributor: explicitly disabled via build config 00:02:20.802 efd: explicitly disabled via build config 00:02:20.802 eventdev: explicitly disabled via build config 00:02:20.802 dispatcher: explicitly disabled via build config 00:02:20.802 gpudev: explicitly disabled via build config 00:02:20.802 gro: explicitly disabled via build config 00:02:20.802 gso: explicitly disabled via build config 00:02:20.802 ip_frag: explicitly disabled via build config 00:02:20.802 jobstats: explicitly disabled via build config 00:02:20.802 latencystats: explicitly disabled via build config 00:02:20.802 lpm: explicitly disabled via build config 00:02:20.802 member: explicitly disabled via build config 00:02:20.802 pcapng: explicitly disabled via build config 00:02:20.802 rawdev: explicitly disabled via build config 00:02:20.802 regexdev: explicitly disabled via build config 00:02:20.802 mldev: explicitly disabled via build config 00:02:20.802 rib: explicitly disabled via build config 00:02:20.802 sched: explicitly disabled via build config 00:02:20.802 stack: explicitly disabled via build config 00:02:20.802 ipsec: explicitly disabled via build config 00:02:20.802 pdcp: explicitly disabled via build config 00:02:20.802 fib: explicitly disabled via build config 00:02:20.802 port: explicitly disabled via build config 00:02:20.802 pdump: explicitly disabled via build config 00:02:20.802 table: explicitly disabled via build config 00:02:20.802 pipeline: explicitly disabled via build config 00:02:20.802 graph: explicitly disabled via build config 00:02:20.802 node: explicitly disabled via build config 00:02:20.802 00:02:20.802 drivers: 00:02:20.802 common/cpt: not in enabled drivers build config 00:02:20.802 common/dpaax: not in enabled drivers build config 00:02:20.802 common/iavf: not in enabled drivers build config 00:02:20.802 common/idpf: not in enabled drivers build config 00:02:20.802 common/ionic: not in enabled drivers build config 00:02:20.802 common/mvep: not in enabled drivers build config 00:02:20.802 common/octeontx: not in enabled drivers build config 00:02:20.802 bus/auxiliary: not in enabled drivers build config 00:02:20.802 bus/cdx: not in enabled drivers build config 00:02:20.802 bus/dpaa: not in enabled drivers build config 00:02:20.802 bus/fslmc: not in enabled drivers build config 00:02:20.802 bus/ifpga: not in enabled drivers build config 00:02:20.802 bus/platform: not in enabled drivers build config 00:02:20.802 bus/uacce: not in enabled drivers build config 00:02:20.802 bus/vmbus: not in enabled drivers build config 00:02:20.802 common/cnxk: not in enabled drivers build config 00:02:20.802 common/mlx5: not in enabled drivers build config 00:02:20.802 common/nfp: not in enabled drivers build config 00:02:20.802 common/nitrox: not in enabled drivers build config 00:02:20.802 common/qat: not in enabled drivers build config 
00:02:20.802 common/sfc_efx: not in enabled drivers build config 00:02:20.802 mempool/bucket: not in enabled drivers build config 00:02:20.802 mempool/cnxk: not in enabled drivers build config 00:02:20.802 mempool/dpaa: not in enabled drivers build config 00:02:20.802 mempool/dpaa2: not in enabled drivers build config 00:02:20.802 mempool/octeontx: not in enabled drivers build config 00:02:20.802 mempool/stack: not in enabled drivers build config 00:02:20.802 dma/cnxk: not in enabled drivers build config 00:02:20.802 dma/dpaa: not in enabled drivers build config 00:02:20.802 dma/dpaa2: not in enabled drivers build config 00:02:20.802 dma/hisilicon: not in enabled drivers build config 00:02:20.802 dma/idxd: not in enabled drivers build config 00:02:20.802 dma/ioat: not in enabled drivers build config 00:02:20.802 dma/skeleton: not in enabled drivers build config 00:02:20.802 net/af_packet: not in enabled drivers build config 00:02:20.802 net/af_xdp: not in enabled drivers build config 00:02:20.802 net/ark: not in enabled drivers build config 00:02:20.802 net/atlantic: not in enabled drivers build config 00:02:20.802 net/avp: not in enabled drivers build config 00:02:20.802 net/axgbe: not in enabled drivers build config 00:02:20.802 net/bnx2x: not in enabled drivers build config 00:02:20.802 net/bnxt: not in enabled drivers build config 00:02:20.802 net/bonding: not in enabled drivers build config 00:02:20.802 net/cnxk: not in enabled drivers build config 00:02:20.802 net/cpfl: not in enabled drivers build config 00:02:20.802 net/cxgbe: not in enabled drivers build config 00:02:20.802 net/dpaa: not in enabled drivers build config 00:02:20.802 net/dpaa2: not in enabled drivers build config 00:02:20.802 net/e1000: not in enabled drivers build config 00:02:20.802 net/ena: not in enabled drivers build config 00:02:20.802 net/enetc: not in enabled drivers build config 00:02:20.802 net/enetfec: not in enabled drivers build config 00:02:20.802 net/enic: not in enabled drivers build config 00:02:20.802 net/failsafe: not in enabled drivers build config 00:02:20.802 net/fm10k: not in enabled drivers build config 00:02:20.802 net/gve: not in enabled drivers build config 00:02:20.802 net/hinic: not in enabled drivers build config 00:02:20.802 net/hns3: not in enabled drivers build config 00:02:20.802 net/i40e: not in enabled drivers build config 00:02:20.802 net/iavf: not in enabled drivers build config 00:02:20.802 net/ice: not in enabled drivers build config 00:02:20.802 net/idpf: not in enabled drivers build config 00:02:20.802 net/igc: not in enabled drivers build config 00:02:20.802 net/ionic: not in enabled drivers build config 00:02:20.802 net/ipn3ke: not in enabled drivers build config 00:02:20.802 net/ixgbe: not in enabled drivers build config 00:02:20.802 net/mana: not in enabled drivers build config 00:02:20.803 net/memif: not in enabled drivers build config 00:02:20.803 net/mlx4: not in enabled drivers build config 00:02:20.803 net/mlx5: not in enabled drivers build config 00:02:20.803 net/mvneta: not in enabled drivers build config 00:02:20.803 net/mvpp2: not in enabled drivers build config 00:02:20.803 net/netvsc: not in enabled drivers build config 00:02:20.803 net/nfb: not in enabled drivers build config 00:02:20.803 net/nfp: not in enabled drivers build config 00:02:20.803 net/ngbe: not in enabled drivers build config 00:02:20.803 net/null: not in enabled drivers build config 00:02:20.803 net/octeontx: not in enabled drivers build config 00:02:20.803 net/octeon_ep: not in enabled 
drivers build config 00:02:20.803 net/pcap: not in enabled drivers build config 00:02:20.803 net/pfe: not in enabled drivers build config 00:02:20.803 net/qede: not in enabled drivers build config 00:02:20.803 net/ring: not in enabled drivers build config 00:02:20.803 net/sfc: not in enabled drivers build config 00:02:20.803 net/softnic: not in enabled drivers build config 00:02:20.803 net/tap: not in enabled drivers build config 00:02:20.803 net/thunderx: not in enabled drivers build config 00:02:20.803 net/txgbe: not in enabled drivers build config 00:02:20.803 net/vdev_netvsc: not in enabled drivers build config 00:02:20.803 net/vhost: not in enabled drivers build config 00:02:20.803 net/virtio: not in enabled drivers build config 00:02:20.803 net/vmxnet3: not in enabled drivers build config 00:02:20.803 raw/*: missing internal dependency, "rawdev" 00:02:20.803 crypto/armv8: not in enabled drivers build config 00:02:20.803 crypto/bcmfs: not in enabled drivers build config 00:02:20.803 crypto/caam_jr: not in enabled drivers build config 00:02:20.803 crypto/ccp: not in enabled drivers build config 00:02:20.803 crypto/cnxk: not in enabled drivers build config 00:02:20.803 crypto/dpaa_sec: not in enabled drivers build config 00:02:20.803 crypto/dpaa2_sec: not in enabled drivers build config 00:02:20.803 crypto/ipsec_mb: not in enabled drivers build config 00:02:20.803 crypto/mlx5: not in enabled drivers build config 00:02:20.803 crypto/mvsam: not in enabled drivers build config 00:02:20.803 crypto/nitrox: not in enabled drivers build config 00:02:20.803 crypto/null: not in enabled drivers build config 00:02:20.803 crypto/octeontx: not in enabled drivers build config 00:02:20.803 crypto/openssl: not in enabled drivers build config 00:02:20.803 crypto/scheduler: not in enabled drivers build config 00:02:20.803 crypto/uadk: not in enabled drivers build config 00:02:20.803 crypto/virtio: not in enabled drivers build config 00:02:20.803 compress/isal: not in enabled drivers build config 00:02:20.803 compress/mlx5: not in enabled drivers build config 00:02:20.803 compress/nitrox: not in enabled drivers build config 00:02:20.803 compress/octeontx: not in enabled drivers build config 00:02:20.803 compress/zlib: not in enabled drivers build config 00:02:20.803 regex/*: missing internal dependency, "regexdev" 00:02:20.803 ml/*: missing internal dependency, "mldev" 00:02:20.803 vdpa/ifc: not in enabled drivers build config 00:02:20.803 vdpa/mlx5: not in enabled drivers build config 00:02:20.803 vdpa/nfp: not in enabled drivers build config 00:02:20.803 vdpa/sfc: not in enabled drivers build config 00:02:20.803 event/*: missing internal dependency, "eventdev" 00:02:20.803 baseband/*: missing internal dependency, "bbdev" 00:02:20.803 gpu/*: missing internal dependency, "gpudev" 00:02:20.803 00:02:20.803 00:02:20.803 Build targets in project: 85 00:02:20.803 00:02:20.803 DPDK 24.03.0 00:02:20.803 00:02:20.803 User defined options 00:02:20.803 buildtype : debug 00:02:20.803 default_library : static 00:02:20.803 libdir : lib 00:02:20.803 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:20.803 c_args : -fPIC -Werror 00:02:20.803 c_link_args : 00:02:20.803 cpu_instruction_set: native 00:02:20.803 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:20.803 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:20.803 enable_docs : false 00:02:20.803 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:20.803 enable_kmods : false 00:02:20.803 max_lcores : 128 00:02:20.803 tests : false 00:02:20.803 00:02:20.803 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.064 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:21.329 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.329 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.329 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.329 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.329 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:21.329 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.329 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:21.329 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:21.329 [9/268] Linking static target lib/librte_kvargs.a 00:02:21.329 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:21.329 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.329 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:21.329 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:21.329 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:21.329 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.329 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:21.329 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:21.329 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.329 [19/268] Linking static target lib/librte_log.a 00:02:21.329 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:21.329 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.329 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.329 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:21.329 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.329 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:21.329 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.329 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.329 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.329 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:21.329 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:21.329 [31/268] Linking static target lib/librte_pci.a 00:02:21.329 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.329 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.587 [34/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:02:21.587 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.587 [36/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.587 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:21.587 [38/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.587 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.587 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:21.587 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:21.847 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:21.847 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:21.847 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:21.847 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:21.847 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:21.847 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:21.847 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:21.847 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:21.847 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:21.847 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:21.847 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:21.847 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.847 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:21.847 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:21.847 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:21.847 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:21.847 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:21.847 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:21.847 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:21.847 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:21.847 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:21.847 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:21.847 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:21.847 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:21.847 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.847 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:21.847 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:21.847 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.847 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:21.847 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:21.847 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:21.847 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:21.847 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:21.847 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:21.847 [76/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.847 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:21.847 [78/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.847 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:21.847 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:21.847 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:21.847 [82/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.847 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:21.847 [84/268] Linking static target lib/librte_meter.a 00:02:21.847 [85/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.847 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:21.847 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:21.847 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:21.847 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:21.847 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:21.847 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:21.847 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:21.847 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.847 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:21.847 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.847 [96/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.847 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:21.847 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:21.847 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:21.847 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:21.847 [101/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:21.847 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.847 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:21.847 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.847 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:21.847 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.847 [107/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:21.847 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.847 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.847 [110/268] Linking static target lib/librte_ring.a 00:02:21.847 [111/268] Linking static target lib/librte_telemetry.a 00:02:21.847 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.847 [113/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:21.847 [114/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.847 [115/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.847 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:21.847 [117/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:21.847 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:21.847 [119/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.847 [120/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:21.847 [121/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.847 [122/268] Linking static target lib/librte_timer.a 00:02:21.847 [123/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.847 [124/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.847 [125/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.847 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.847 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.847 [128/268] Linking static target lib/librte_eal.a 00:02:21.847 [129/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.847 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:21.847 [131/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.847 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:21.847 [133/268] Linking static target lib/librte_cmdline.a 00:02:21.847 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:21.847 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:21.847 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.847 [137/268] Linking static target lib/librte_dmadev.a 00:02:21.847 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:21.847 [139/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.847 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:21.847 [141/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.847 [142/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:21.847 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.847 [144/268] Linking static target lib/librte_net.a 00:02:21.847 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:21.847 [146/268] Linking static target lib/librte_mempool.a 00:02:22.106 [147/268] Linking static target lib/librte_rcu.a 00:02:22.106 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:22.106 [149/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.106 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.106 [151/268] Linking target lib/librte_log.so.24.1 00:02:22.106 [152/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.106 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.106 [154/268] Linking static target lib/librte_compressdev.a 00:02:22.106 [155/268] Linking static target lib/librte_mbuf.a 00:02:22.106 [156/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.106 [157/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.106 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.106 [159/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.106 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.106 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:22.106 [162/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.106 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.106 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.106 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.106 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.106 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.106 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.106 [169/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.106 [170/268] Linking static target lib/librte_hash.a 00:02:22.106 [171/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.106 [172/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.106 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.106 [174/268] Linking target lib/librte_kvargs.so.24.1 00:02:22.106 [175/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.106 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.364 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.365 [178/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.365 [179/268] Linking static target lib/librte_reorder.a 00:02:22.365 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.365 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.365 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.365 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.365 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.365 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.365 [186/268] Linking static target lib/librte_power.a 00:02:22.365 [187/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.365 [188/268] Linking static target lib/librte_cryptodev.a 00:02:22.365 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.365 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.365 [191/268] Linking static target lib/librte_security.a 00:02:22.365 [192/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.365 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.365 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.365 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:22.365 [196/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.365 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.365 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.365 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.365 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.365 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.365 [202/268] Linking static target drivers/librte_bus_vdev.a 00:02:22.365 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.365 [204/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.365 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.365 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.624 [207/268] Linking static target drivers/librte_bus_pci.a 00:02:22.624 [208/268] Linking target lib/librte_telemetry.so.24.1 00:02:22.624 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.624 [210/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.624 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.624 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.624 [213/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:22.624 [214/268] Linking static target lib/librte_ethdev.a 00:02:22.624 [215/268] Linking static target drivers/librte_mempool_ring.a 00:02:22.624 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.624 [217/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:22.624 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.882 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.882 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.882 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.882 [222/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.882 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.141 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.141 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.141 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.141 [227/268] Linking static target lib/librte_vhost.a 00:02:23.400 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.400 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.773 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.338 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:31.902 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.188 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.188 [234/268] Linking target lib/librte_eal.so.24.1 00:02:35.188 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:35.188 [236/268] Linking target lib/librte_ring.so.24.1 00:02:35.188 [237/268] Linking target lib/librte_meter.so.24.1 00:02:35.188 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:35.188 [239/268] Linking target lib/librte_timer.so.24.1 00:02:35.188 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:35.188 [241/268] Linking target lib/librte_pci.so.24.1 00:02:35.188 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:35.188 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:35.188 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:35.188 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:35.188 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:35.188 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:35.188 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:35.188 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:35.188 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.188 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.188 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.188 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:35.476 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.476 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:35.476 [256/268] Linking target lib/librte_net.so.24.1 00:02:35.476 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:35.476 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:35.733 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:35.733 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:35.733 [261/268] Linking target lib/librte_hash.so.24.1 00:02:35.733 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:35.733 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:35.733 [264/268] Linking target lib/librte_security.so.24.1 00:02:35.733 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:35.733 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:35.991 [267/268] Linking target lib/librte_power.so.24.1 00:02:35.991 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:35.991 INFO: autodetecting backend as ninja 00:02:35.991 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:36.921 CC lib/log/log.o 00:02:36.921 CC lib/log/log_flags.o 00:02:36.921 CC lib/log/log_deprecated.o 00:02:36.921 CC lib/ut_mock/mock.o 00:02:36.921 CC lib/ut/ut.o 00:02:36.921 LIB libspdk_log.a 00:02:36.921 LIB libspdk_ut_mock.a 00:02:36.921 LIB libspdk_ut.a 00:02:37.179 CXX lib/trace_parser/trace.o 00:02:37.179 CC lib/util/base64.o 00:02:37.179 CC lib/util/cpuset.o 00:02:37.179 
CC lib/util/bit_array.o 00:02:37.179 CC lib/util/crc16.o 00:02:37.179 CC lib/util/crc32.o 00:02:37.179 CC lib/util/crc32_ieee.o 00:02:37.436 CC lib/util/crc32c.o 00:02:37.436 CC lib/util/crc64.o 00:02:37.436 CC lib/util/dif.o 00:02:37.436 CC lib/util/fd.o 00:02:37.436 CC lib/util/fd_group.o 00:02:37.436 CC lib/util/file.o 00:02:37.436 CC lib/util/hexlify.o 00:02:37.436 CC lib/util/iov.o 00:02:37.436 CC lib/util/math.o 00:02:37.436 CC lib/util/net.o 00:02:37.436 CC lib/util/pipe.o 00:02:37.436 CC lib/util/strerror_tls.o 00:02:37.436 CC lib/util/string.o 00:02:37.436 CC lib/util/uuid.o 00:02:37.436 CC lib/dma/dma.o 00:02:37.436 CC lib/util/xor.o 00:02:37.436 CC lib/util/zipf.o 00:02:37.436 CC lib/ioat/ioat.o 00:02:37.436 CC lib/vfio_user/host/vfio_user_pci.o 00:02:37.436 CC lib/vfio_user/host/vfio_user.o 00:02:37.436 LIB libspdk_dma.a 00:02:37.436 LIB libspdk_ioat.a 00:02:37.694 LIB libspdk_vfio_user.a 00:02:37.695 LIB libspdk_util.a 00:02:37.695 LIB libspdk_trace_parser.a 00:02:37.953 CC lib/env_dpdk/env.o 00:02:37.953 CC lib/env_dpdk/pci.o 00:02:37.953 CC lib/env_dpdk/memory.o 00:02:37.953 CC lib/env_dpdk/init.o 00:02:37.953 CC lib/env_dpdk/pci_vmd.o 00:02:37.953 CC lib/env_dpdk/threads.o 00:02:37.953 CC lib/env_dpdk/pci_ioat.o 00:02:37.953 CC lib/env_dpdk/pci_virtio.o 00:02:37.953 CC lib/env_dpdk/pci_idxd.o 00:02:37.953 CC lib/env_dpdk/pci_event.o 00:02:37.953 CC lib/env_dpdk/sigbus_handler.o 00:02:37.953 CC lib/env_dpdk/pci_dpdk.o 00:02:37.953 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:37.953 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:37.953 CC lib/json/json_util.o 00:02:37.953 CC lib/json/json_parse.o 00:02:37.953 CC lib/json/json_write.o 00:02:37.953 CC lib/vmd/led.o 00:02:37.953 CC lib/vmd/vmd.o 00:02:37.953 CC lib/rdma_provider/common.o 00:02:37.953 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:37.953 CC lib/conf/conf.o 00:02:37.953 CC lib/rdma_utils/rdma_utils.o 00:02:37.953 CC lib/idxd/idxd_kernel.o 00:02:37.953 CC lib/idxd/idxd.o 00:02:37.953 CC lib/idxd/idxd_user.o 00:02:38.213 LIB libspdk_rdma_provider.a 00:02:38.213 LIB libspdk_conf.a 00:02:38.213 LIB libspdk_json.a 00:02:38.213 LIB libspdk_rdma_utils.a 00:02:38.213 LIB libspdk_idxd.a 00:02:38.213 LIB libspdk_vmd.a 00:02:38.472 CC lib/jsonrpc/jsonrpc_server.o 00:02:38.472 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:38.472 CC lib/jsonrpc/jsonrpc_client.o 00:02:38.472 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:38.732 LIB libspdk_jsonrpc.a 00:02:38.991 LIB libspdk_env_dpdk.a 00:02:38.991 CC lib/rpc/rpc.o 00:02:38.991 LIB libspdk_rpc.a 00:02:39.250 CC lib/trace/trace.o 00:02:39.250 CC lib/trace/trace_flags.o 00:02:39.250 CC lib/trace/trace_rpc.o 00:02:39.509 CC lib/notify/notify.o 00:02:39.509 CC lib/notify/notify_rpc.o 00:02:39.509 CC lib/keyring/keyring.o 00:02:39.509 CC lib/keyring/keyring_rpc.o 00:02:39.509 LIB libspdk_trace.a 00:02:39.509 LIB libspdk_notify.a 00:02:39.509 LIB libspdk_keyring.a 00:02:39.768 CC lib/thread/thread.o 00:02:39.768 CC lib/thread/iobuf.o 00:02:39.768 CC lib/sock/sock.o 00:02:39.768 CC lib/sock/sock_rpc.o 00:02:40.027 LIB libspdk_sock.a 00:02:40.286 CC lib/nvme/nvme_ctrlr.o 00:02:40.286 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:40.286 CC lib/nvme/nvme_ns_cmd.o 00:02:40.286 CC lib/nvme/nvme_fabric.o 00:02:40.286 CC lib/nvme/nvme_pcie_common.o 00:02:40.286 CC lib/nvme/nvme_ns.o 00:02:40.286 CC lib/nvme/nvme_pcie.o 00:02:40.286 CC lib/nvme/nvme_qpair.o 00:02:40.286 CC lib/nvme/nvme.o 00:02:40.286 CC lib/nvme/nvme_quirks.o 00:02:40.286 CC lib/nvme/nvme_transport.o 00:02:40.286 CC lib/nvme/nvme_discovery.o 00:02:40.286 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.286 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.286 CC lib/nvme/nvme_tcp.o 00:02:40.286 CC lib/nvme/nvme_opal.o 00:02:40.286 CC lib/nvme/nvme_poll_group.o 00:02:40.286 CC lib/nvme/nvme_io_msg.o 00:02:40.544 CC lib/nvme/nvme_zns.o 00:02:40.545 CC lib/nvme/nvme_stubs.o 00:02:40.545 CC lib/nvme/nvme_auth.o 00:02:40.545 CC lib/nvme/nvme_cuse.o 00:02:40.545 CC lib/nvme/nvme_vfio_user.o 00:02:40.545 CC lib/nvme/nvme_rdma.o 00:02:40.545 LIB libspdk_thread.a 00:02:40.803 CC lib/init/json_config.o 00:02:40.803 CC lib/init/subsystem.o 00:02:40.803 CC lib/init/subsystem_rpc.o 00:02:40.803 CC lib/init/rpc.o 00:02:40.803 CC lib/accel/accel.o 00:02:40.803 CC lib/accel/accel_rpc.o 00:02:40.803 CC lib/blob/blobstore.o 00:02:40.803 CC lib/vfu_tgt/tgt_endpoint.o 00:02:40.803 CC lib/blob/request.o 00:02:40.803 CC lib/blob/blob_bs_dev.o 00:02:40.803 CC lib/accel/accel_sw.o 00:02:40.803 CC lib/blob/zeroes.o 00:02:40.803 CC lib/vfu_tgt/tgt_rpc.o 00:02:40.803 CC lib/virtio/virtio_vhost_user.o 00:02:40.803 CC lib/virtio/virtio.o 00:02:40.803 CC lib/virtio/virtio_pci.o 00:02:40.803 CC lib/virtio/virtio_vfio_user.o 00:02:41.062 LIB libspdk_init.a 00:02:41.062 LIB libspdk_vfu_tgt.a 00:02:41.062 LIB libspdk_virtio.a 00:02:41.322 CC lib/event/app.o 00:02:41.322 CC lib/event/reactor.o 00:02:41.322 CC lib/event/log_rpc.o 00:02:41.322 CC lib/event/app_rpc.o 00:02:41.322 CC lib/event/scheduler_static.o 00:02:41.581 LIB libspdk_accel.a 00:02:41.581 LIB libspdk_event.a 00:02:41.581 LIB libspdk_nvme.a 00:02:41.841 CC lib/bdev/bdev.o 00:02:41.841 CC lib/bdev/bdev_rpc.o 00:02:41.841 CC lib/bdev/bdev_zone.o 00:02:41.841 CC lib/bdev/part.o 00:02:41.841 CC lib/bdev/scsi_nvme.o 00:02:42.779 LIB libspdk_blob.a 00:02:42.779 CC lib/lvol/lvol.o 00:02:42.779 CC lib/blobfs/tree.o 00:02:42.779 CC lib/blobfs/blobfs.o 00:02:43.403 LIB libspdk_lvol.a 00:02:43.403 LIB libspdk_blobfs.a 00:02:43.662 LIB libspdk_bdev.a 00:02:43.920 CC lib/ublk/ublk.o 00:02:43.920 CC lib/ublk/ublk_rpc.o 00:02:43.920 CC lib/nbd/nbd.o 00:02:43.920 CC lib/nbd/nbd_rpc.o 00:02:43.920 CC lib/scsi/dev.o 00:02:43.920 CC lib/scsi/port.o 00:02:43.920 CC lib/scsi/lun.o 00:02:43.920 CC lib/nvmf/ctrlr.o 00:02:43.920 CC lib/scsi/scsi.o 00:02:43.920 CC lib/nvmf/ctrlr_discovery.o 00:02:43.920 CC lib/scsi/scsi_bdev.o 00:02:43.920 CC lib/nvmf/nvmf.o 00:02:43.920 CC lib/nvmf/ctrlr_bdev.o 00:02:43.920 CC lib/scsi/scsi_pr.o 00:02:43.920 CC lib/nvmf/subsystem.o 00:02:43.920 CC lib/scsi/scsi_rpc.o 00:02:43.920 CC lib/nvmf/tcp.o 00:02:43.920 CC lib/scsi/task.o 00:02:43.920 CC lib/ftl/ftl_core.o 00:02:43.920 CC lib/nvmf/nvmf_rpc.o 00:02:43.920 CC lib/ftl/ftl_init.o 00:02:43.920 CC lib/nvmf/transport.o 00:02:43.920 CC lib/ftl/ftl_layout.o 00:02:43.920 CC lib/ftl/ftl_debug.o 00:02:43.920 CC lib/ftl/ftl_io.o 00:02:43.920 CC lib/nvmf/stubs.o 00:02:43.920 CC lib/nvmf/mdns_server.o 00:02:43.920 CC lib/ftl/ftl_sb.o 00:02:43.920 CC lib/nvmf/vfio_user.o 00:02:43.920 CC lib/ftl/ftl_l2p.o 00:02:43.920 CC lib/nvmf/rdma.o 00:02:43.920 CC lib/ftl/ftl_l2p_flat.o 00:02:43.920 CC lib/nvmf/auth.o 00:02:43.920 CC lib/ftl/ftl_nv_cache.o 00:02:43.920 CC lib/ftl/ftl_band.o 00:02:43.920 CC lib/ftl/ftl_band_ops.o 00:02:43.920 CC lib/ftl/ftl_writer.o 00:02:43.920 CC lib/ftl/ftl_rq.o 00:02:43.920 CC lib/ftl/ftl_reloc.o 00:02:43.920 CC lib/ftl/ftl_l2p_cache.o 00:02:43.920 CC lib/ftl/ftl_p2l.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_startup.o 
00:02:43.920 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:43.920 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:43.920 CC lib/ftl/utils/ftl_conf.o 00:02:43.920 CC lib/ftl/utils/ftl_mempool.o 00:02:43.920 CC lib/ftl/utils/ftl_md.o 00:02:43.920 CC lib/ftl/utils/ftl_property.o 00:02:43.920 CC lib/ftl/utils/ftl_bitmap.o 00:02:43.920 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:43.920 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:43.920 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:43.920 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:43.920 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:43.920 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:43.920 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:43.920 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:43.920 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:43.920 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:43.920 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:43.920 CC lib/ftl/base/ftl_base_dev.o 00:02:43.920 CC lib/ftl/base/ftl_base_bdev.o 00:02:43.920 CC lib/ftl/ftl_trace.o 00:02:44.180 LIB libspdk_nbd.a 00:02:44.180 LIB libspdk_ublk.a 00:02:44.438 LIB libspdk_scsi.a 00:02:44.438 LIB libspdk_ftl.a 00:02:44.697 CC lib/vhost/vhost.o 00:02:44.697 CC lib/vhost/vhost_rpc.o 00:02:44.697 CC lib/vhost/vhost_scsi.o 00:02:44.697 CC lib/vhost/vhost_blk.o 00:02:44.697 CC lib/vhost/rte_vhost_user.o 00:02:44.697 CC lib/iscsi/init_grp.o 00:02:44.697 CC lib/iscsi/conn.o 00:02:44.697 CC lib/iscsi/iscsi.o 00:02:44.697 CC lib/iscsi/md5.o 00:02:44.697 CC lib/iscsi/param.o 00:02:44.697 CC lib/iscsi/iscsi_subsystem.o 00:02:44.697 CC lib/iscsi/portal_grp.o 00:02:44.697 CC lib/iscsi/tgt_node.o 00:02:44.697 CC lib/iscsi/iscsi_rpc.o 00:02:44.697 CC lib/iscsi/task.o 00:02:45.265 LIB libspdk_nvmf.a 00:02:45.265 LIB libspdk_vhost.a 00:02:45.524 LIB libspdk_iscsi.a 00:02:45.784 CC module/vfu_device/vfu_virtio.o 00:02:45.784 CC module/vfu_device/vfu_virtio_blk.o 00:02:45.784 CC module/vfu_device/vfu_virtio_scsi.o 00:02:45.784 CC module/vfu_device/vfu_virtio_rpc.o 00:02:45.784 CC module/env_dpdk/env_dpdk_rpc.o 00:02:46.042 CC module/accel/error/accel_error_rpc.o 00:02:46.042 CC module/accel/iaa/accel_iaa.o 00:02:46.042 LIB libspdk_env_dpdk_rpc.a 00:02:46.042 CC module/accel/error/accel_error.o 00:02:46.042 CC module/accel/iaa/accel_iaa_rpc.o 00:02:46.042 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:46.042 CC module/scheduler/gscheduler/gscheduler.o 00:02:46.042 CC module/accel/ioat/accel_ioat.o 00:02:46.042 CC module/accel/dsa/accel_dsa.o 00:02:46.042 CC module/accel/ioat/accel_ioat_rpc.o 00:02:46.042 CC module/keyring/file/keyring.o 00:02:46.042 CC module/keyring/file/keyring_rpc.o 00:02:46.042 CC module/accel/dsa/accel_dsa_rpc.o 00:02:46.042 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:46.042 CC module/blob/bdev/blob_bdev.o 00:02:46.042 CC module/sock/posix/posix.o 00:02:46.042 CC module/keyring/linux/keyring.o 00:02:46.042 CC module/keyring/linux/keyring_rpc.o 00:02:46.042 LIB libspdk_keyring_file.a 00:02:46.042 LIB libspdk_accel_error.a 00:02:46.043 LIB libspdk_scheduler_dpdk_governor.a 00:02:46.043 LIB libspdk_scheduler_dynamic.a 00:02:46.043 LIB libspdk_scheduler_gscheduler.a 00:02:46.043 LIB libspdk_accel_iaa.a 00:02:46.043 LIB libspdk_keyring_linux.a 00:02:46.043 LIB libspdk_accel_ioat.a 
00:02:46.300 LIB libspdk_blob_bdev.a 00:02:46.300 LIB libspdk_accel_dsa.a 00:02:46.300 LIB libspdk_vfu_device.a 00:02:46.558 LIB libspdk_sock_posix.a 00:02:46.558 CC module/bdev/error/vbdev_error.o 00:02:46.558 CC module/bdev/error/vbdev_error_rpc.o 00:02:46.558 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:46.558 CC module/bdev/nvme/nvme_rpc.o 00:02:46.558 CC module/bdev/nvme/bdev_nvme.o 00:02:46.558 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:46.558 CC module/bdev/malloc/bdev_malloc.o 00:02:46.558 CC module/bdev/nvme/bdev_mdns_client.o 00:02:46.558 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:46.558 CC module/bdev/nvme/vbdev_opal.o 00:02:46.558 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:46.558 CC module/bdev/split/vbdev_split.o 00:02:46.558 CC module/bdev/split/vbdev_split_rpc.o 00:02:46.558 CC module/blobfs/bdev/blobfs_bdev.o 00:02:46.558 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:46.558 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:46.558 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:46.558 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:46.558 CC module/bdev/delay/vbdev_delay.o 00:02:46.558 CC module/bdev/gpt/gpt.o 00:02:46.558 CC module/bdev/null/bdev_null.o 00:02:46.558 CC module/bdev/gpt/vbdev_gpt.o 00:02:46.558 CC module/bdev/null/bdev_null_rpc.o 00:02:46.558 CC module/bdev/aio/bdev_aio.o 00:02:46.558 CC module/bdev/lvol/vbdev_lvol.o 00:02:46.558 CC module/bdev/aio/bdev_aio_rpc.o 00:02:46.558 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:46.558 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:46.558 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:46.558 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:46.558 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:46.558 CC module/bdev/ftl/bdev_ftl.o 00:02:46.558 CC module/bdev/iscsi/bdev_iscsi.o 00:02:46.558 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:46.558 CC module/bdev/raid/bdev_raid_rpc.o 00:02:46.558 CC module/bdev/raid/bdev_raid_sb.o 00:02:46.558 CC module/bdev/raid/bdev_raid.o 00:02:46.558 CC module/bdev/raid/raid0.o 00:02:46.558 CC module/bdev/raid/raid1.o 00:02:46.558 CC module/bdev/raid/concat.o 00:02:46.558 CC module/bdev/passthru/vbdev_passthru.o 00:02:46.558 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:46.816 LIB libspdk_blobfs_bdev.a 00:02:46.816 LIB libspdk_bdev_split.a 00:02:46.816 LIB libspdk_bdev_error.a 00:02:46.816 LIB libspdk_bdev_null.a 00:02:46.816 LIB libspdk_bdev_gpt.a 00:02:46.816 LIB libspdk_bdev_ftl.a 00:02:46.816 LIB libspdk_bdev_zone_block.a 00:02:46.816 LIB libspdk_bdev_aio.a 00:02:46.816 LIB libspdk_bdev_malloc.a 00:02:46.816 LIB libspdk_bdev_passthru.a 00:02:46.816 LIB libspdk_bdev_iscsi.a 00:02:46.816 LIB libspdk_bdev_delay.a 00:02:47.075 LIB libspdk_bdev_lvol.a 00:02:47.075 LIB libspdk_bdev_virtio.a 00:02:47.333 LIB libspdk_bdev_raid.a 00:02:47.902 LIB libspdk_bdev_nvme.a 00:02:48.472 CC module/event/subsystems/keyring/keyring.o 00:02:48.472 CC module/event/subsystems/vmd/vmd.o 00:02:48.472 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:48.472 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:48.472 CC module/event/subsystems/scheduler/scheduler.o 00:02:48.472 CC module/event/subsystems/sock/sock.o 00:02:48.472 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:48.472 CC module/event/subsystems/iobuf/iobuf.o 00:02:48.472 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:48.472 LIB libspdk_event_keyring.a 00:02:48.472 LIB libspdk_event_vmd.a 00:02:48.472 LIB libspdk_event_scheduler.a 00:02:48.472 LIB libspdk_event_vfu_tgt.a 00:02:48.472 LIB libspdk_event_vhost_blk.a 00:02:48.732 LIB 
libspdk_event_sock.a 00:02:48.732 LIB libspdk_event_iobuf.a 00:02:48.992 CC module/event/subsystems/accel/accel.o 00:02:48.992 LIB libspdk_event_accel.a 00:02:49.560 CC module/event/subsystems/bdev/bdev.o 00:02:49.560 LIB libspdk_event_bdev.a 00:02:49.819 CC module/event/subsystems/ublk/ublk.o 00:02:49.819 CC module/event/subsystems/scsi/scsi.o 00:02:49.819 CC module/event/subsystems/nbd/nbd.o 00:02:49.819 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:49.819 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:49.819 LIB libspdk_event_ublk.a 00:02:49.819 LIB libspdk_event_scsi.a 00:02:50.079 LIB libspdk_event_nbd.a 00:02:50.079 LIB libspdk_event_nvmf.a 00:02:50.338 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:50.338 CC module/event/subsystems/iscsi/iscsi.o 00:02:50.338 LIB libspdk_event_vhost_scsi.a 00:02:50.338 LIB libspdk_event_iscsi.a 00:02:50.598 TEST_HEADER include/spdk/accel_module.h 00:02:50.598 TEST_HEADER include/spdk/accel.h 00:02:50.598 TEST_HEADER include/spdk/barrier.h 00:02:50.598 TEST_HEADER include/spdk/assert.h 00:02:50.598 TEST_HEADER include/spdk/bdev.h 00:02:50.598 TEST_HEADER include/spdk/base64.h 00:02:50.598 TEST_HEADER include/spdk/bdev_module.h 00:02:50.598 TEST_HEADER include/spdk/bit_pool.h 00:02:50.598 TEST_HEADER include/spdk/bit_array.h 00:02:50.598 TEST_HEADER include/spdk/bdev_zone.h 00:02:50.598 TEST_HEADER include/spdk/blob_bdev.h 00:02:50.598 TEST_HEADER include/spdk/blobfs.h 00:02:50.598 TEST_HEADER include/spdk/blob.h 00:02:50.598 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:50.598 TEST_HEADER include/spdk/conf.h 00:02:50.598 CC app/spdk_nvme_identify/identify.o 00:02:50.598 CC test/rpc_client/rpc_client_test.o 00:02:50.598 TEST_HEADER include/spdk/config.h 00:02:50.598 TEST_HEADER include/spdk/cpuset.h 00:02:50.598 TEST_HEADER include/spdk/crc16.h 00:02:50.598 CC app/spdk_top/spdk_top.o 00:02:50.598 TEST_HEADER include/spdk/crc32.h 00:02:50.598 TEST_HEADER include/spdk/dif.h 00:02:50.598 TEST_HEADER include/spdk/crc64.h 00:02:50.598 TEST_HEADER include/spdk/endian.h 00:02:50.598 CC app/spdk_nvme_perf/perf.o 00:02:50.598 TEST_HEADER include/spdk/env_dpdk.h 00:02:50.598 TEST_HEADER include/spdk/dma.h 00:02:50.598 TEST_HEADER include/spdk/env.h 00:02:50.598 TEST_HEADER include/spdk/event.h 00:02:50.598 TEST_HEADER include/spdk/fd.h 00:02:50.598 TEST_HEADER include/spdk/file.h 00:02:50.598 TEST_HEADER include/spdk/ftl.h 00:02:50.598 TEST_HEADER include/spdk/fd_group.h 00:02:50.598 CC app/trace_record/trace_record.o 00:02:50.598 TEST_HEADER include/spdk/hexlify.h 00:02:50.598 TEST_HEADER include/spdk/gpt_spec.h 00:02:50.598 TEST_HEADER include/spdk/histogram_data.h 00:02:50.598 TEST_HEADER include/spdk/idxd.h 00:02:50.598 TEST_HEADER include/spdk/idxd_spec.h 00:02:50.598 TEST_HEADER include/spdk/init.h 00:02:50.598 CXX app/trace/trace.o 00:02:50.598 CC app/spdk_nvme_discover/discovery_aer.o 00:02:50.598 TEST_HEADER include/spdk/ioat_spec.h 00:02:50.598 TEST_HEADER include/spdk/ioat.h 00:02:50.598 TEST_HEADER include/spdk/iscsi_spec.h 00:02:50.598 TEST_HEADER include/spdk/jsonrpc.h 00:02:50.598 TEST_HEADER include/spdk/json.h 00:02:50.598 TEST_HEADER include/spdk/keyring.h 00:02:50.598 TEST_HEADER include/spdk/keyring_module.h 00:02:50.598 CC app/spdk_lspci/spdk_lspci.o 00:02:50.598 TEST_HEADER include/spdk/likely.h 00:02:50.598 TEST_HEADER include/spdk/log.h 00:02:50.598 TEST_HEADER include/spdk/lvol.h 00:02:50.598 TEST_HEADER include/spdk/memory.h 00:02:50.598 TEST_HEADER include/spdk/mmio.h 00:02:50.598 TEST_HEADER include/spdk/notify.h 
00:02:50.598 TEST_HEADER include/spdk/net.h 00:02:50.598 TEST_HEADER include/spdk/nbd.h 00:02:50.598 TEST_HEADER include/spdk/nvme.h 00:02:50.864 TEST_HEADER include/spdk/nvme_intel.h 00:02:50.864 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:50.864 TEST_HEADER include/spdk/nvme_spec.h 00:02:50.864 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:50.864 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:50.864 TEST_HEADER include/spdk/nvme_zns.h 00:02:50.864 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:50.864 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:50.864 TEST_HEADER include/spdk/nvmf.h 00:02:50.864 TEST_HEADER include/spdk/nvmf_transport.h 00:02:50.865 TEST_HEADER include/spdk/opal.h 00:02:50.865 TEST_HEADER include/spdk/nvmf_spec.h 00:02:50.865 TEST_HEADER include/spdk/opal_spec.h 00:02:50.865 TEST_HEADER include/spdk/pci_ids.h 00:02:50.865 TEST_HEADER include/spdk/reduce.h 00:02:50.865 TEST_HEADER include/spdk/pipe.h 00:02:50.865 TEST_HEADER include/spdk/rpc.h 00:02:50.865 TEST_HEADER include/spdk/queue.h 00:02:50.865 TEST_HEADER include/spdk/scsi.h 00:02:50.865 TEST_HEADER include/spdk/scheduler.h 00:02:50.865 TEST_HEADER include/spdk/sock.h 00:02:50.865 CC app/iscsi_tgt/iscsi_tgt.o 00:02:50.865 TEST_HEADER include/spdk/stdinc.h 00:02:50.865 TEST_HEADER include/spdk/scsi_spec.h 00:02:50.865 TEST_HEADER include/spdk/trace.h 00:02:50.865 TEST_HEADER include/spdk/string.h 00:02:50.865 TEST_HEADER include/spdk/thread.h 00:02:50.865 TEST_HEADER include/spdk/tree.h 00:02:50.865 TEST_HEADER include/spdk/trace_parser.h 00:02:50.865 TEST_HEADER include/spdk/ublk.h 00:02:50.865 TEST_HEADER include/spdk/uuid.h 00:02:50.865 TEST_HEADER include/spdk/util.h 00:02:50.865 TEST_HEADER include/spdk/version.h 00:02:50.865 CC app/nvmf_tgt/nvmf_main.o 00:02:50.865 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:50.865 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:50.865 TEST_HEADER include/spdk/vhost.h 00:02:50.865 TEST_HEADER include/spdk/vmd.h 00:02:50.865 CC app/spdk_dd/spdk_dd.o 00:02:50.865 TEST_HEADER include/spdk/xor.h 00:02:50.865 TEST_HEADER include/spdk/zipf.h 00:02:50.865 CXX test/cpp_headers/accel.o 00:02:50.865 CXX test/cpp_headers/accel_module.o 00:02:50.865 CXX test/cpp_headers/barrier.o 00:02:50.865 CXX test/cpp_headers/assert.o 00:02:50.865 CXX test/cpp_headers/base64.o 00:02:50.865 CXX test/cpp_headers/bdev.o 00:02:50.865 CXX test/cpp_headers/bdev_module.o 00:02:50.865 CXX test/cpp_headers/bdev_zone.o 00:02:50.865 CXX test/cpp_headers/bit_array.o 00:02:50.865 CXX test/cpp_headers/bit_pool.o 00:02:50.865 CXX test/cpp_headers/blob_bdev.o 00:02:50.865 CXX test/cpp_headers/blobfs_bdev.o 00:02:50.865 CXX test/cpp_headers/blobfs.o 00:02:50.865 CC test/env/vtophys/vtophys.o 00:02:50.865 CXX test/cpp_headers/config.o 00:02:50.865 CXX test/cpp_headers/blob.o 00:02:50.865 CXX test/cpp_headers/conf.o 00:02:50.865 CXX test/cpp_headers/crc16.o 00:02:50.865 CXX test/cpp_headers/cpuset.o 00:02:50.865 CXX test/cpp_headers/dif.o 00:02:50.865 CXX test/cpp_headers/crc32.o 00:02:50.865 CXX test/cpp_headers/crc64.o 00:02:50.865 CXX test/cpp_headers/endian.o 00:02:50.865 CXX test/cpp_headers/dma.o 00:02:50.865 CXX test/cpp_headers/env_dpdk.o 00:02:50.865 CC app/spdk_tgt/spdk_tgt.o 00:02:50.865 CXX test/cpp_headers/event.o 00:02:50.865 CXX test/cpp_headers/fd.o 00:02:50.865 CXX test/cpp_headers/fd_group.o 00:02:50.865 CXX test/cpp_headers/env.o 00:02:50.865 CXX test/cpp_headers/ftl.o 00:02:50.865 CXX test/cpp_headers/file.o 00:02:50.865 CC test/env/pci/pci_ut.o 00:02:50.865 CXX test/cpp_headers/gpt_spec.o 
00:02:50.865 CXX test/cpp_headers/hexlify.o 00:02:50.865 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:50.865 CXX test/cpp_headers/histogram_data.o 00:02:50.865 CXX test/cpp_headers/idxd.o 00:02:50.865 CXX test/cpp_headers/idxd_spec.o 00:02:50.865 CXX test/cpp_headers/init.o 00:02:50.865 CXX test/cpp_headers/ioat_spec.o 00:02:50.865 CXX test/cpp_headers/ioat.o 00:02:50.865 CXX test/cpp_headers/iscsi_spec.o 00:02:50.865 CXX test/cpp_headers/json.o 00:02:50.865 CXX test/cpp_headers/keyring.o 00:02:50.865 CXX test/cpp_headers/jsonrpc.o 00:02:50.865 CXX test/cpp_headers/likely.o 00:02:50.865 CXX test/cpp_headers/keyring_module.o 00:02:50.865 CXX test/cpp_headers/log.o 00:02:50.865 CXX test/cpp_headers/memory.o 00:02:50.865 CXX test/cpp_headers/lvol.o 00:02:50.865 CXX test/cpp_headers/mmio.o 00:02:50.865 CXX test/cpp_headers/nbd.o 00:02:50.865 CXX test/cpp_headers/net.o 00:02:50.865 CXX test/cpp_headers/notify.o 00:02:50.865 CXX test/cpp_headers/nvme.o 00:02:50.865 CXX test/cpp_headers/nvme_intel.o 00:02:50.865 CXX test/cpp_headers/nvme_ocssd.o 00:02:50.865 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:50.865 CXX test/cpp_headers/nvme_zns.o 00:02:50.865 CXX test/cpp_headers/nvme_spec.o 00:02:50.865 CXX test/cpp_headers/nvmf_cmd.o 00:02:50.865 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:50.865 CXX test/cpp_headers/nvmf.o 00:02:50.865 CXX test/cpp_headers/nvmf_transport.o 00:02:50.865 CXX test/cpp_headers/nvmf_spec.o 00:02:50.865 CC examples/ioat/verify/verify.o 00:02:50.865 CXX test/cpp_headers/opal.o 00:02:50.865 CC test/env/memory/memory_ut.o 00:02:50.865 CXX test/cpp_headers/opal_spec.o 00:02:50.865 CXX test/cpp_headers/pci_ids.o 00:02:50.865 CC examples/util/zipf/zipf.o 00:02:50.865 CXX test/cpp_headers/pipe.o 00:02:50.865 CXX test/cpp_headers/queue.o 00:02:50.865 CXX test/cpp_headers/reduce.o 00:02:50.865 CC test/thread/poller_perf/poller_perf.o 00:02:50.865 CXX test/cpp_headers/scheduler.o 00:02:50.865 CXX test/cpp_headers/rpc.o 00:02:50.865 CXX test/cpp_headers/scsi.o 00:02:50.865 CXX test/cpp_headers/scsi_spec.o 00:02:50.865 CXX test/cpp_headers/sock.o 00:02:50.865 CXX test/cpp_headers/stdinc.o 00:02:50.865 CC test/app/stub/stub.o 00:02:50.865 CXX test/cpp_headers/string.o 00:02:50.865 CC test/app/jsoncat/jsoncat.o 00:02:50.865 CXX test/cpp_headers/thread.o 00:02:50.865 CXX test/cpp_headers/trace.o 00:02:50.865 CXX test/cpp_headers/trace_parser.o 00:02:50.865 CXX test/cpp_headers/tree.o 00:02:50.865 CXX test/cpp_headers/util.o 00:02:50.865 CXX test/cpp_headers/ublk.o 00:02:50.865 CC app/fio/nvme/fio_plugin.o 00:02:50.865 CC examples/ioat/perf/perf.o 00:02:50.865 CC test/thread/lock/spdk_lock.o 00:02:50.865 CC test/app/histogram_perf/histogram_perf.o 00:02:50.865 CXX test/cpp_headers/uuid.o 00:02:50.865 LINK spdk_lspci 00:02:50.865 CC test/dma/test_dma/test_dma.o 00:02:50.865 CXX test/cpp_headers/version.o 00:02:50.865 LINK rpc_client_test 00:02:50.865 CC app/fio/bdev/fio_plugin.o 00:02:50.865 CC test/app/bdev_svc/bdev_svc.o 00:02:50.865 CXX test/cpp_headers/vfio_user_pci.o 00:02:50.865 CC test/env/mem_callbacks/mem_callbacks.o 00:02:50.865 LINK spdk_trace_record 00:02:50.865 LINK spdk_nvme_discover 00:02:50.865 LINK vtophys 00:02:50.865 LINK interrupt_tgt 00:02:50.865 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:50.865 CXX test/cpp_headers/vfio_user_spec.o 00:02:50.865 CXX test/cpp_headers/vhost.o 00:02:51.124 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:51.124 CXX test/cpp_headers/vmd.o 00:02:51.124 CXX test/cpp_headers/xor.o 00:02:51.124 CXX test/cpp_headers/zipf.o 
00:02:51.124 LINK env_dpdk_post_init 00:02:51.124 LINK iscsi_tgt 00:02:51.124 LINK jsoncat 00:02:51.124 LINK nvmf_tgt 00:02:51.124 LINK poller_perf 00:02:51.124 LINK zipf 00:02:51.124 LINK histogram_perf 00:02:51.124 LINK verify 00:02:51.124 LINK stub 00:02:51.124 LINK spdk_tgt 00:02:51.124 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:51.124 LINK ioat_perf 00:02:51.125 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:51.125 struct spdk_nvme_fdp_ruhs ruhs; 00:02:51.125 ^ 00:02:51.125 LINK bdev_svc 00:02:51.125 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:51.125 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:51.125 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:51.125 LINK spdk_trace 00:02:51.125 LINK pci_ut 00:02:51.125 LINK test_dma 00:02:51.383 LINK nvme_fuzz 00:02:51.383 1 warning generated. 00:02:51.383 LINK spdk_dd 00:02:51.383 LINK spdk_nvme_identify 00:02:51.383 LINK spdk_bdev 00:02:51.383 LINK mem_callbacks 00:02:51.383 LINK spdk_nvme 00:02:51.383 LINK spdk_nvme_perf 00:02:51.383 LINK llvm_vfio_fuzz 00:02:51.383 LINK vhost_fuzz 00:02:51.383 LINK spdk_top 00:02:51.641 LINK llvm_nvme_fuzz 00:02:51.641 CC app/vhost/vhost.o 00:02:51.641 CC examples/sock/hello_world/hello_sock.o 00:02:51.641 CC examples/idxd/perf/perf.o 00:02:51.641 CC examples/vmd/led/led.o 00:02:51.641 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.641 LINK memory_ut 00:02:51.641 CC examples/thread/thread/thread_ex.o 00:02:51.900 LINK lsvmd 00:02:51.900 LINK led 00:02:51.900 LINK vhost 00:02:51.900 LINK hello_sock 00:02:51.900 LINK spdk_lock 00:02:51.900 LINK idxd_perf 00:02:51.900 LINK thread 00:02:52.158 LINK iscsi_fuzz 00:02:52.416 CC test/event/event_perf/event_perf.o 00:02:52.675 CC examples/nvme/abort/abort.o 00:02:52.675 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:52.675 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:52.675 CC examples/nvme/reconnect/reconnect.o 00:02:52.675 CC test/event/reactor_perf/reactor_perf.o 00:02:52.675 CC examples/nvme/hello_world/hello_world.o 00:02:52.675 CC examples/nvme/hotplug/hotplug.o 00:02:52.675 CC examples/nvme/arbitration/arbitration.o 00:02:52.675 CC test/event/reactor/reactor.o 00:02:52.675 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:52.675 CC test/event/app_repeat/app_repeat.o 00:02:52.675 CC test/event/scheduler/scheduler.o 00:02:52.675 LINK event_perf 00:02:52.675 LINK reactor 00:02:52.675 LINK reactor_perf 00:02:52.675 LINK pmr_persistence 00:02:52.675 LINK cmb_copy 00:02:52.675 LINK app_repeat 00:02:52.675 LINK hello_world 00:02:52.675 LINK hotplug 00:02:52.675 LINK scheduler 00:02:52.675 LINK reconnect 00:02:52.675 LINK abort 00:02:52.933 LINK arbitration 00:02:52.933 LINK nvme_manage 00:02:52.933 CC test/nvme/overhead/overhead.o 00:02:52.933 CC test/nvme/sgl/sgl.o 00:02:52.933 CC test/nvme/connect_stress/connect_stress.o 00:02:52.933 CC test/nvme/boot_partition/boot_partition.o 00:02:52.933 CC test/nvme/aer/aer.o 00:02:52.933 CC test/nvme/simple_copy/simple_copy.o 00:02:52.933 CC test/nvme/startup/startup.o 00:02:52.933 CC test/nvme/e2edp/nvme_dp.o 00:02:52.933 CC test/nvme/err_injection/err_injection.o 00:02:52.933 CC test/nvme/fused_ordering/fused_ordering.o 00:02:52.933 CC test/nvme/cuse/cuse.o 00:02:52.933 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:52.933 CC test/nvme/reserve/reserve.o 00:02:52.933 CC test/nvme/reset/reset.o 00:02:52.933 CC test/accel/dif/dif.o 00:02:52.933 CC 
test/nvme/compliance/nvme_compliance.o 00:02:52.933 CC test/blobfs/mkfs/mkfs.o 00:02:52.933 CC test/nvme/fdp/fdp.o 00:02:53.192 LINK boot_partition 00:02:53.192 LINK connect_stress 00:02:53.192 LINK startup 00:02:53.192 CC test/lvol/esnap/esnap.o 00:02:53.192 LINK doorbell_aers 00:02:53.192 LINK err_injection 00:02:53.192 LINK fused_ordering 00:02:53.192 LINK reserve 00:02:53.192 LINK simple_copy 00:02:53.192 LINK mkfs 00:02:53.192 LINK overhead 00:02:53.192 LINK aer 00:02:53.192 LINK sgl 00:02:53.192 LINK nvme_dp 00:02:53.192 LINK reset 00:02:53.192 LINK fdp 00:02:53.450 LINK nvme_compliance 00:02:53.450 LINK dif 00:02:53.708 CC examples/accel/perf/accel_perf.o 00:02:53.708 CC examples/blob/cli/blobcli.o 00:02:53.708 CC examples/blob/hello_world/hello_blob.o 00:02:53.966 LINK hello_blob 00:02:53.966 LINK cuse 00:02:53.966 LINK accel_perf 00:02:53.966 LINK blobcli 00:02:54.901 CC examples/bdev/bdevperf/bdevperf.o 00:02:54.901 CC examples/bdev/hello_world/hello_bdev.o 00:02:54.901 LINK hello_bdev 00:02:54.901 CC test/bdev/bdevio/bdevio.o 00:02:55.160 LINK bdevperf 00:02:55.160 LINK bdevio 00:02:56.536 LINK esnap 00:02:56.536 CC examples/nvmf/nvmf/nvmf.o 00:02:56.795 LINK nvmf 00:02:58.175 00:02:58.175 real 0m45.426s 00:02:58.175 user 5m35.005s 00:02:58.175 sys 2m29.274s 00:02:58.175 20:53:25 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:58.175 20:53:25 make -- common/autotest_common.sh@10 -- $ set +x 00:02:58.175 ************************************ 00:02:58.175 END TEST make 00:02:58.175 ************************************ 00:02:58.175 20:53:25 -- common/autotest_common.sh@1142 -- $ return 0 00:02:58.175 20:53:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:58.175 20:53:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:58.175 20:53:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:58.175 20:53:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.175 20:53:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:58.175 20:53:25 -- pm/common@44 -- $ pid=651169 00:02:58.175 20:53:25 -- pm/common@50 -- $ kill -TERM 651169 00:02:58.175 20:53:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.175 20:53:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:58.175 20:53:25 -- pm/common@44 -- $ pid=651171 00:02:58.175 20:53:25 -- pm/common@50 -- $ kill -TERM 651171 00:02:58.175 20:53:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.175 20:53:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:58.175 20:53:25 -- pm/common@44 -- $ pid=651173 00:02:58.175 20:53:25 -- pm/common@50 -- $ kill -TERM 651173 00:02:58.175 20:53:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.175 20:53:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:58.175 20:53:25 -- pm/common@44 -- $ pid=651196 00:02:58.175 20:53:25 -- pm/common@50 -- $ sudo -E kill -TERM 651196 00:02:58.175 20:53:25 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:58.175 20:53:25 -- nvmf/common.sh@7 -- # uname -s 00:02:58.175 20:53:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:58.175 20:53:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:02:58.175 20:53:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:58.175 20:53:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:58.175 20:53:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:58.175 20:53:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:58.175 20:53:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:58.175 20:53:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:58.175 20:53:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:58.175 20:53:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:58.175 20:53:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:58.175 20:53:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:58.175 20:53:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:58.175 20:53:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:58.175 20:53:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:58.175 20:53:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:58.175 20:53:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:58.175 20:53:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:58.175 20:53:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:58.175 20:53:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:58.175 20:53:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.175 20:53:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.175 20:53:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.175 20:53:25 -- paths/export.sh@5 -- # export PATH 00:02:58.175 20:53:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.175 20:53:25 -- nvmf/common.sh@47 -- # : 0 00:02:58.175 20:53:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:58.175 20:53:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:58.175 20:53:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:58.175 20:53:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:58.175 20:53:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:58.175 20:53:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:58.175 20:53:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:58.175 20:53:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:58.175 20:53:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 
00:02:58.175 20:53:25 -- spdk/autotest.sh@32 -- # uname -s 00:02:58.175 20:53:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:58.175 20:53:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:58.175 20:53:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:58.175 20:53:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:58.175 20:53:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:58.175 20:53:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:58.434 20:53:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:58.434 20:53:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:58.434 20:53:25 -- spdk/autotest.sh@48 -- # udevadm_pid=713184 00:02:58.434 20:53:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:58.434 20:53:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:58.434 20:53:25 -- pm/common@17 -- # local monitor 00:02:58.434 20:53:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.434 20:53:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.434 20:53:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.434 20:53:25 -- pm/common@21 -- # date +%s 00:02:58.434 20:53:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.434 20:53:25 -- pm/common@21 -- # date +%s 00:02:58.434 20:53:25 -- pm/common@25 -- # sleep 1 00:02:58.434 20:53:25 -- pm/common@21 -- # date +%s 00:02:58.434 20:53:25 -- pm/common@21 -- # date +%s 00:02:58.434 20:53:25 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069605 00:02:58.434 20:53:25 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069605 00:02:58.434 20:53:25 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069605 00:02:58.434 20:53:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069605 00:02:58.434 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721069605_collect-vmstat.pm.log 00:02:58.434 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721069605_collect-cpu-load.pm.log 00:02:58.434 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721069605_collect-cpu-temp.pm.log 00:02:58.434 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721069605_collect-bmc-pm.bmc.pm.log 00:02:59.372 20:53:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:59.372 20:53:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:59.372 20:53:26 -- common/autotest_common.sh@722 -- # xtrace_disable 
00:02:59.372 20:53:26 -- common/autotest_common.sh@10 -- # set +x 00:02:59.372 20:53:26 -- spdk/autotest.sh@59 -- # create_test_list 00:02:59.372 20:53:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:59.372 20:53:26 -- common/autotest_common.sh@10 -- # set +x 00:02:59.372 20:53:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:59.372 20:53:26 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:59.372 20:53:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:59.372 20:53:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:59.372 20:53:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:59.372 20:53:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:59.372 20:53:26 -- common/autotest_common.sh@1455 -- # uname 00:02:59.372 20:53:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:59.372 20:53:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:59.372 20:53:26 -- common/autotest_common.sh@1475 -- # uname 00:02:59.372 20:53:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:59.372 20:53:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:59.372 20:53:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:59.372 20:53:26 -- spdk/autotest.sh@72 -- # hash lcov 00:02:59.372 20:53:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:59.372 20:53:26 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:59.372 20:53:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:59.372 20:53:26 -- common/autotest_common.sh@10 -- # set +x 00:02:59.372 20:53:26 -- spdk/autotest.sh@91 -- # rm -f 00:02:59.372 20:53:26 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.656 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:02.656 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:02.656 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:02.656 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:02.656 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:02.656 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:02.656 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:02.656 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:02.656 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:02.915 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:02.915 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:02.915 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:02.915 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:02.915 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:02.915 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:02.915 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:02.915 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:02.915 20:53:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:02.915 20:53:30 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:02.915 20:53:30 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:02.915 20:53:30 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:02.915 20:53:30 -- common/autotest_common.sh@1672 
-- # for nvme in /sys/block/nvme* 00:03:02.915 20:53:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:02.915 20:53:30 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:02.915 20:53:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.915 20:53:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:02.915 20:53:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:02.915 20:53:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:02.915 20:53:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:02.915 20:53:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:02.915 20:53:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:02.915 20:53:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:03.173 No valid GPT data, bailing 00:03:03.173 20:53:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:03.173 20:53:30 -- scripts/common.sh@391 -- # pt= 00:03:03.173 20:53:30 -- scripts/common.sh@392 -- # return 1 00:03:03.173 20:53:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:03.173 1+0 records in 00:03:03.173 1+0 records out 00:03:03.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502313 s, 209 MB/s 00:03:03.173 20:53:30 -- spdk/autotest.sh@118 -- # sync 00:03:03.173 20:53:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:03.173 20:53:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:03.173 20:53:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:09.748 20:53:36 -- spdk/autotest.sh@124 -- # uname -s 00:03:09.748 20:53:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:09.748 20:53:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.748 20:53:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.748 20:53:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.748 20:53:36 -- common/autotest_common.sh@10 -- # set +x 00:03:09.748 ************************************ 00:03:09.748 START TEST setup.sh 00:03:09.748 ************************************ 00:03:09.748 20:53:36 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.748 * Looking for test storage... 00:03:09.748 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:09.748 20:53:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:09.748 20:53:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:09.748 20:53:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:09.748 20:53:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.748 20:53:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.748 20:53:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:09.748 ************************************ 00:03:09.748 START TEST acl 00:03:09.748 ************************************ 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:09.748 * Looking for test storage... 
00:03:09.748 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:09.748 20:53:36 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.748 20:53:36 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:09.748 20:53:36 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:09.748 20:53:36 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:09.748 20:53:36 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:09.748 20:53:36 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:09.748 20:53:36 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:09.748 20:53:36 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.748 20:53:36 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.049 20:53:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:14.049 20:53:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:14.049 20:53:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.049 20:53:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:14.049 20:53:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.049 20:53:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:16.586 Hugepages 00:03:16.586 node hugesize free / total 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 00:03:16.586 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.586 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:16.587 20:53:43 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:16.587 20:53:43 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.587 20:53:43 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.587 20:53:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:16.587 ************************************ 00:03:16.587 START TEST denied 00:03:16.587 ************************************ 00:03:16.587 20:53:43 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:16.587 20:53:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:16.587 20:53:43 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:16.587 20:53:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:16.587 20:53:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.587 20:53:43 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:20.782 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:20.782 
20:53:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.782 20:53:47 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.069 00:03:24.069 real 0m7.199s 00:03:24.070 user 0m2.136s 00:03:24.070 sys 0m4.226s 00:03:24.070 20:53:51 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.070 20:53:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:24.070 ************************************ 00:03:24.070 END TEST denied 00:03:24.070 ************************************ 00:03:24.070 20:53:51 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:24.070 20:53:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:24.070 20:53:51 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.070 20:53:51 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.070 20:53:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.070 ************************************ 00:03:24.070 START TEST allowed 00:03:24.070 ************************************ 00:03:24.070 20:53:51 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:24.070 20:53:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:24.070 20:53:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:24.070 20:53:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.070 20:53:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:24.070 20:53:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:29.340 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:29.340 20:53:56 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:29.340 20:53:56 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:29.340 20:53:56 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:29.340 20:53:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.340 20:53:56 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.626 00:03:32.626 real 0m8.365s 00:03:32.626 user 0m2.274s 00:03:32.626 sys 0m4.563s 00:03:32.626 20:53:59 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.626 20:53:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:32.626 ************************************ 00:03:32.626 END TEST allowed 00:03:32.626 ************************************ 00:03:32.626 20:53:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:32.626 00:03:32.626 real 0m22.844s 00:03:32.626 user 0m7.085s 00:03:32.626 sys 0m13.687s 00:03:32.626 20:53:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.626 20:53:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:32.626 ************************************ 00:03:32.626 END TEST acl 00:03:32.626 ************************************ 00:03:32.626 20:53:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 
00:03:32.626 20:53:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:32.626 20:53:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.626 20:53:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.626 20:53:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.626 ************************************ 00:03:32.626 START TEST hugepages 00:03:32.626 ************************************ 00:03:32.626 20:53:59 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:32.626 * Looking for test storage... 00:03:32.626 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41688864 kB' 'MemAvailable: 43996268 kB' 'Buffers: 11496 kB' 'Cached: 10291692 kB' 'SwapCached: 16 kB' 'Active: 8604104 kB' 'Inactive: 2283636 kB' 'Active(anon): 8128976 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587908 kB' 'Mapped: 190532 kB' 'Shmem: 7623248 kB' 'KReclaimable: 249188 kB' 'Slab: 796508 kB' 'SReclaimable: 249188 kB' 'SUnreclaim: 547320 kB' 'KernelStack: 21872 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439068 kB' 'Committed_AS: 9584644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213508 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 
2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.626 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 
20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 
20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.627 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.628 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.628 20:53:59 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:03:32.628 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.628 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.628 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.628 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.628 20:53:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:32.628 20:53:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.628 20:53:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.628 20:53:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.628 ************************************ 00:03:32.628 START TEST default_setup 00:03:32.628 ************************************ 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.628 20:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:35.911 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:00:04.5 
(8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:35.911 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:37.821 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43793132 kB' 'MemAvailable: 46100520 kB' 'Buffers: 11496 kB' 'Cached: 10291828 kB' 'SwapCached: 16 kB' 'Active: 8630864 kB' 'Inactive: 2283636 kB' 'Active(anon): 8155736 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613916 kB' 'Mapped: 191264 kB' 'Shmem: 7623384 kB' 'KReclaimable: 249156 kB' 'Slab: 794924 kB' 
'SReclaimable: 249156 kB' 'SUnreclaim: 545768 kB' 'KernelStack: 22192 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9611756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213608 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.821 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43796444 kB' 'MemAvailable: 46103832 kB' 'Buffers: 11496 kB' 'Cached: 10291832 kB' 'SwapCached: 16 kB' 'Active: 8625772 kB' 'Inactive: 2283636 kB' 'Active(anon): 8150644 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608928 kB' 'Mapped: 191096 kB' 'Shmem: 7623388 kB' 'KReclaimable: 249156 kB' 'Slab: 794916 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545760 kB' 'KernelStack: 22192 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9606672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213604 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.822 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
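Every key name on the right-hand side of these tests is printed with each character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and so on). That is not corruption in the log: bash xtrace renders a quoted word on the right of [[ == ]] that way to mark it as a literal comparison rather than a glob. A short snippet assumed to reproduce the effect:

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x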
00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.823 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
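The get_meminfo call being set up here (local get=HugePages_Rsvd, local node=) runs without a NUMA node argument, so the [[ -e /sys/devices/system/node/node/meminfo ]] probe fails and the helper reads the system-wide /proc/meminfo; the mapfile plus the "${mem[@]#Node +([0-9]) }" expansion are there to strip the "Node <N> " prefix that only the per-node files carry. A rough per-node variant of the same lookup (paths as in the trace, helper name and structure hypothetical):

    get_node_meminfo() {
        # Prefer the per-NUMA-node meminfo file when a node number is supplied,
        # otherwise fall back to the global /proc/meminfo.
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <N> "; drop that so both
        # sources parse identically, then print the value of the requested key.
        sed 's/^Node [0-9]* //' "$mem_f" | awk -v k="$get" -F': +' '$1 == k {print $2+0; exit}'
    }
    resv=$(get_node_meminfo HugePages_Rsvd)     # 0 here
    free0=$(get_node_meminfo MemFree 0)         # per-node variant, node 0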
00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43796564 kB' 'MemAvailable: 46103952 kB' 'Buffers: 11496 kB' 'Cached: 10291832 kB' 'SwapCached: 16 kB' 'Active: 8626616 kB' 'Inactive: 2283636 kB' 'Active(anon): 8151488 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609876 kB' 'Mapped: 190752 kB' 'Shmem: 7623388 kB' 'KReclaimable: 249156 kB' 'Slab: 794920 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545764 kB' 'KernelStack: 22208 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9605308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 
20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 
20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.825 nr_hugepages=1024 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.825 resv_hugepages=0 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.825 surplus_hugepages=0 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.825 anon_hugepages=0 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43795436 kB' 'MemAvailable: 46102824 kB' 
'Buffers: 11496 kB' 'Cached: 10291832 kB' 'SwapCached: 16 kB' 'Active: 8625960 kB' 'Inactive: 2283636 kB' 'Active(anon): 8150832 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609752 kB' 'Mapped: 190684 kB' 'Shmem: 7623388 kB' 'KReclaimable: 249156 kB' 'Slab: 794840 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545684 kB' 'KernelStack: 22320 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9606716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213732 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
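[The block above and below repeats one pattern for every /proc/meminfo key. As a minimal illustration (not the SPDK helper itself), the traced setup/common.sh get_meminfo appears to read /proc/meminfo, or a per-node meminfo file, with IFS=': ', "continue" past every key that is not the one requested, then echo the matching value and return. The function name and error handling below are ours; the field handling mirrors the trace.]

    # sketch of the lookup loop traced above; names are illustrative
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node files live under sysfs and prefix each line with "Node <id>",
        # which is why the traced helper strips that prefix before parsing
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the per-key "continue" seen throughout the trace
            echo "$val"                        # e.g. 1024 for HugePages_Total; the "kB" unit lands in $_
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
    }

[Called as "get_meminfo_sketch HugePages_Total" or "get_meminfo_sketch HugePages_Surp 0" for node 0, matching the system-wide and per-node lookups visible in this trace.]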
00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:37.826 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25918924 kB' 'MemUsed: 6673160 kB' 'SwapCached: 16 kB' 'Active: 2901580 kB' 'Inactive: 180800 kB' 'Active(anon): 2684960 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2875748 kB' 'Mapped: 128516 kB' 'AnonPages: 210556 kB' 'Shmem: 2478328 kB' 'KernelStack: 12856 kB' 'PageTables: 5220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134432 kB' 'Slab: 391028 kB' 'SReclaimable: 134432 kB' 'SUnreclaim: 256596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.827 node0=1024 expecting 1024 00:03:37.827 20:54:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.827 00:03:37.827 real 0m5.064s 00:03:37.827 user 0m1.268s 00:03:37.828 sys 0m2.398s 00:03:37.828 20:54:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.828 20:54:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:37.828 ************************************ 00:03:37.828 END TEST default_setup 00:03:37.828 ************************************ 00:03:37.828 20:54:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:37.828 20:54:04 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:37.828 20:54:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.828 20:54:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.828 20:54:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.828 ************************************ 00:03:37.828 START TEST per_node_1G_alloc 00:03:37.828 ************************************ 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:37.828 20:54:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.828 20:54:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:40.355 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 
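[Illustration only: the NRHUGE=512 HUGENODE=0,1 values set just above follow from the get_test_nr_hugepages 1048576 0 1 call, i.e. a 1048576 kB (1 GiB) request divided by the default 2048 kB hugepage size and split across the two listed NUMA nodes. The real allocation is performed by scripts/setup.sh invoked here; the sketch below only shows the arithmetic and the standard per-node sysfs knob such a request maps to. Variable names and the direct tee write are ours.]

    request_kb=1048576
    hugepage_kb=2048
    per_node=$(( request_kb / hugepage_kb ))    # 512 pages for each node
    for node in 0 1; do
        echo "$per_node" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages"
    done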
00:03:40.355 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.355 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43829228 kB' 'MemAvailable: 46136616 kB' 'Buffers: 11496 kB' 'Cached: 10291976 kB' 'SwapCached: 16 kB' 'Active: 8622652 kB' 'Inactive: 2283636 kB' 'Active(anon): 8147524 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606000 kB' 'Mapped: 190676 kB' 'Shmem: 7623532 kB' 'KReclaimable: 249156 kB' 'Slab: 795100 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545944 kB' 'KernelStack: 21904 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9600164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213588 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.618 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.619 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43829852 kB' 'MemAvailable: 46137240 kB' 'Buffers: 11496 kB' 'Cached: 10291980 kB' 'SwapCached: 16 kB' 'Active: 8623288 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148160 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606620 kB' 'Mapped: 190676 kB' 'Shmem: 7623536 kB' 'KReclaimable: 249156 kB' 'Slab: 795140 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545984 kB' 'KernelStack: 21968 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9601300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 
20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.620 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.621 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43830608 kB' 'MemAvailable: 46137996 kB' 'Buffers: 11496 kB' 'Cached: 10292008 kB' 'SwapCached: 16 kB' 'Active: 8623876 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148748 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607208 kB' 'Mapped: 190676 kB' 'Shmem: 7623564 kB' 'KReclaimable: 249156 kB' 'Slab: 795140 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545984 kB' 'KernelStack: 21984 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9601324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213556 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
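Editor's note: the long runs of IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue entries in this stretch are setup/common.sh's get_meminfo() walking the captured meminfo dump one field at a time until the requested key matches, then echoing its value (0 for HugePages_Rsvd in this run). A minimal sketch of that matching loop follows; it is reconstructed from what the trace shows, not copied from the upstream script, and the name get_meminfo_sketch is used here for illustration only. The real helper additionally reads /sys/devices/system/node/node<N>/meminfo when a NUMA node is requested and strips the leading "Node <N> " prefix before matching.

  # Illustrative reconstruction of the per-key scan seen in the trace above.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # keys that do not match the requested field are skipped, exactly
          # like the repeated [[ ... ]] / continue entries in this log
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

Called as get_meminfo_sketch HugePages_Rsvd, this would print the same 0 that the echo 0 / return 0 entries report a little further down.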
00:03:40.622 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 
20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 
20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.623 
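Editor's note: with anon, surp and resv now read back (all 0 in this run), hugepages.sh reports them and verifies the pool in the entries that follow (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0, then two arithmetic assertions). The sketch below is a reconstruction inferred from the expanded xtrace, not a verbatim copy of the script; in particular, the literal 1024 on the left of the (( ... )) lines is presumably the HugePages_Total value read earlier, already expanded by xtrace.

  # Values gathered via get_meminfo in the preceding entries (this run):
  anon=0              # AnonHugePages
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  nr_hugepages=1024   # requested pool size for this test
  # Consistency checks as they appear (expanded) in the trace:
  (( 1024 == nr_hugepages + surp + resv ))   # no surplus/reserved pages skew the pool
  (( 1024 == nr_hugepages ))                 # the pool matches exactly what was requested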
nr_hugepages=1024 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.623 resv_hugepages=0 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.623 surplus_hugepages=0 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.623 anon_hugepages=0 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.623 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43830696 kB' 'MemAvailable: 46138084 kB' 'Buffers: 11496 kB' 'Cached: 10292020 kB' 'SwapCached: 16 kB' 'Active: 8623664 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148536 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607000 kB' 'Mapped: 190692 kB' 'Shmem: 7623576 kB' 'KReclaimable: 249156 kB' 'Slab: 795140 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545984 kB' 'KernelStack: 21920 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9601348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213588 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.624 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26958824 kB' 'MemUsed: 5633260 kB' 'SwapCached: 16 kB' 'Active: 2899424 kB' 'Inactive: 180800 kB' 'Active(anon): 2682804 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2875876 kB' 'Mapped: 128012 kB' 'AnonPages: 207460 kB' 'Shmem: 2478456 kB' 'KernelStack: 12776 kB' 'PageTables: 4964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134432 kB' 'Slab: 391436 kB' 'SReclaimable: 134432 kB' 'SUnreclaim: 257004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 
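The node-scoped lookups that follow pick their data source exactly as the trace shows: prefer /sys/devices/system/node/node<N>/meminfo when it exists, otherwise fall back to /proc/meminfo. A rough sketch of that selection plus the NUMA node enumeration, reusing the helper sketched earlier (names are illustrative):

  shopt -s extglob
  declare -A node_hugepages

  for node_dir in /sys/devices/system/node/node+([0-9]); do
      [[ -d $node_dir ]] || continue               # guard in case the glob matches nothing
      node=${node_dir##*node}                      # ".../node0" -> "0"
      mem_f=/proc/meminfo                          # global fallback
      [[ -e $node_dir/meminfo ]] && mem_f=$node_dir/meminfo
      node_hugepages[$node]=$(get_meminfo_field HugePages_Total "$mem_f")
  done

  echo "no_nodes=${#node_hugepages[@]}"            # 2 on the machine logged above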
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.625 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 16871872 kB' 'MemUsed: 10831276 kB' 'SwapCached: 0 kB' 'Active: 5724532 kB' 'Inactive: 2102836 kB' 'Active(anon): 5466024 kB' 'Inactive(anon): 78808 kB' 'Active(file): 258508 kB' 'Inactive(file): 2024028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7427700 kB' 'Mapped: 62680 kB' 'AnonPages: 399804 kB' 'Shmem: 5145164 kB' 'KernelStack: 9272 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114724 kB' 'Slab: 403704 kB' 'SReclaimable: 114724 kB' 
'SUnreclaim: 288980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 
20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.627 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 
20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.628 node0=512 expecting 512 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:40.628 node1=512 expecting 512 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:40.628 00:03:40.628 real 0m2.946s 00:03:40.628 user 0m1.008s 00:03:40.628 sys 0m1.928s 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.628 20:54:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:40.628 ************************************ 00:03:40.628 END TEST per_node_1G_alloc 00:03:40.628 ************************************ 00:03:40.886 20:54:07 
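The "node0=512 expecting 512" / "node1=512 expecting 512" lines above are the test's final assertion: the per-node page counts it computed must match what the kernel reports per node. Stripped of the surrounding bookkeeping, that check is roughly the following (the hard-coded 512s stand in for the values read back via the per-node lookups; which side is "expected" is simplified here relative to setup/hugepages.sh, which also folds surplus pages into the expected counts):

  # Expected vs. observed hugepages per NUMA node (values from the run above).
  declare -a nodes_test=(512 512)
  declare -a nodes_sys=(512 512)

  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
  done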
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:40.886 20:54:07 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:40.886 20:54:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.887 20:54:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.887 20:54:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.887 ************************************ 00:03:40.887 START TEST even_2G_alloc 00:03:40.887 ************************************ 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.887 20:54:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:44.225 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.225 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.225 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.225 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.226 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43859080 kB' 'MemAvailable: 46166468 kB' 'Buffers: 11496 
kB' 'Cached: 10292144 kB' 'SwapCached: 16 kB' 'Active: 8623972 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148844 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607152 kB' 'Mapped: 189584 kB' 'Shmem: 7623700 kB' 'KReclaimable: 249156 kB' 'Slab: 794172 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545016 kB' 'KernelStack: 22064 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9596136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213780 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 
20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 20:54:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (the get_meminfo AnonHugePages scan repeats the same IFS=': ' / read / compare / continue steps for every remaining /proc/meminfo field, Active through HardwareCorrupted, none of which matches) 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43857368 kB' 'MemAvailable: 46164756 kB' 'Buffers: 11496 kB' 'Cached: 10292148 kB' 'SwapCached: 16 kB' 'Active: 8623508 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148380 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606752 kB' 'Mapped: 189508 kB' 'Shmem: 7623704 kB' 'KReclaimable: 249156 kB' 'Slab: 794240 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545084 kB' 'KernelStack: 22048 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9596152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213716 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.227 
20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (the get_meminfo HugePages_Surp call walks the remaining /proc/meminfo fields, MemAvailable through FileHugePages, with the same IFS=': ' / read / compare / continue pattern; none matches) 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32
-- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.229 20:54:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43860528 kB' 'MemAvailable: 46167916 kB' 'Buffers: 11496 kB' 'Cached: 10292168 kB' 'SwapCached: 16 kB' 'Active: 8623288 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148160 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606516 kB' 'Mapped: 189508 kB' 'Shmem: 7623724 kB' 'KReclaimable: 249156 kB' 'Slab: 794240 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545084 kB' 'KernelStack: 22080 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9594688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213748 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 
20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.229 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
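(The long run of "continue" lines above is get_meminfo in setup/common.sh skipping every meminfo key that does not match the field it was asked for; the backslash-escaped \H\u\g\e\P\a\g\e\s\_\R\s\v\d is simply how bash xtrace renders the pattern operand of the [[ == ]] test. A minimal stand-alone sketch of that scan, reconstructed from this trace rather than copied from the SPDK source — the function name, the sed-based Node-prefix stripping, and the 0 default are assumptions:

  # Sketch: print the value of one meminfo field, optionally for a single NUMA node.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      # Per-node stats live under /sys and prefix every line with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
          echo "${val:-0}"
          return 0
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      echo 0
  }
  # e.g. get_meminfo_sketch HugePages_Rsvd     -> 0 in this run
  #      get_meminfo_sketch HugePages_Total 0  -> 512 in this run

With the node argument present it reads /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo, which is the same switch visible at common.sh@23-24 later in this trace.)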
00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.231 nr_hugepages=1024 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.231 resv_hugepages=0 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.231 surplus_hugepages=0 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.231 anon_hugepages=0 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43861824 kB' 'MemAvailable: 46169212 kB' 'Buffers: 11496 kB' 'Cached: 10292188 kB' 'SwapCached: 16 kB' 'Active: 8623612 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148484 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606796 kB' 'Mapped: 189508 kB' 'Shmem: 7623744 kB' 'KReclaimable: 249156 kB' 'Slab: 794240 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545084 kB' 'KernelStack: 22016 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9594708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213700 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
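(The printf near the start of this block is the full /proc/meminfo snapshot being fed into the scan; outside the test harness the four hugepage counters it cares about can be pulled with a plain grep, which on the values captured in this run would print:

  grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
  # HugePages_Total:    1024
  # HugePages_Free:     1024
  # HugePages_Rsvd:        0
  # HugePages_Surp:        0
)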
00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.231 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.232 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26964156 kB' 'MemUsed: 5627928 kB' 'SwapCached: 16 kB' 'Active: 2898640 kB' 'Inactive: 180800 kB' 'Active(anon): 2682020 kB' 'Inactive(anon): 16 kB' 
'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2875908 kB' 'Mapped: 127728 kB' 'AnonPages: 206656 kB' 'Shmem: 2478488 kB' 'KernelStack: 12840 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134432 kB' 'Slab: 390488 kB' 'SReclaimable: 134432 kB' 'SUnreclaim: 256056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.233 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
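(Stripped of the trace noise, the per-node half of the even_2G_alloc check works out to: 1024 huge pages requested, get_nodes found two NUMA nodes, so each node is expected to hold 512 pages, adjusted by whatever HugePages_Surp the node reports — 0 for node0 above, and node1 is read next. A compact, hypothetical rendering of that accounting; the variable names and the exact comparison are assumptions, the real logic lives in setup/hugepages.sh:

  nr_hugepages=1024
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$(( nr_hugepages / ${#nodes[@]} ))          # 1024 / 2 = 512 in this run
  for node_dir in "${nodes[@]}"; do
      node=${node_dir##*node}
      total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
      surp=$(awk '/HugePages_Surp:/  {print $NF}' "$node_dir/meminfo")
      (( total == per_node + surp )) \
          || echo "node$node: HugePages_Total=$total, expected $((per_node + surp))"
  done

The system-wide counterpart is the (( 1024 == nr_hugepages + surp + resv )) test visible at hugepages.sh@107-110 earlier in this trace.)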
00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 16897092 kB' 'MemUsed: 10806056 kB' 'SwapCached: 0 kB' 'Active: 5725244 kB' 'Inactive: 2102836 kB' 'Active(anon): 5466736 kB' 
'Inactive(anon): 78808 kB' 'Active(file): 258508 kB' 'Inactive(file): 2024028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7427816 kB' 'Mapped: 61780 kB' 'AnonPages: 400328 kB' 'Shmem: 5145280 kB' 'KernelStack: 9240 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114724 kB' 'Slab: 403752 kB' 'SReclaimable: 114724 kB' 'SUnreclaim: 289028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.234 20:54:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.234 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
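Annotation: the long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs above is the xtrace of setup/common.sh's get_meminfo() walking every field of /sys/devices/system/node/node1/meminfo until it reaches the requested key. A minimal sketch of that lookup, reconstructed from the visible trace and simplified; the real SPDK helper may differ in detail:

#!/usr/bin/env bash
# Simplified reconstruction of the get_meminfo() lookup traced above: return the
# value of one meminfo field, optionally for a single NUMA node.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node lookups read the node-specific meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node "$node" }             # node files prefix each key with "Node <N> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# Usage matching the lookup traced above: surplus 2 MiB hugepages on node 1.
get_meminfo HugePages_Surp 1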
00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:44.235 node0=512 expecting 512 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:44.235 node1=512 expecting 512 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:44.235 00:03:44.235 real 0m3.223s 00:03:44.235 user 0m1.167s 00:03:44.235 sys 0m1.995s 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.235 20:54:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.235 ************************************ 00:03:44.235 END TEST even_2G_alloc 00:03:44.235 ************************************ 00:03:44.236 20:54:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.236 20:54:11 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:44.236 20:54:11 setup.sh.hugepages 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.236 20:54:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.236 20:54:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.236 ************************************ 00:03:44.236 START TEST odd_alloc 00:03:44.236 ************************************ 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.236 20:54:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:47.551 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:00:04.4 (8086 2021): Already 
using the vfio-pci driver 00:03:47.551 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.551 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43884680 kB' 'MemAvailable: 46192068 kB' 'Buffers: 11496 kB' 'Cached: 10292300 kB' 'SwapCached: 16 kB' 'Active: 8624856 kB' 'Inactive: 2283636 kB' 'Active(anon): 8149728 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607860 kB' 'Mapped: 189520 kB' 'Shmem: 7623856 kB' 
'KReclaimable: 249156 kB' 'Slab: 795396 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546240 kB' 'KernelStack: 22176 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9595484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213764 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 
20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 
20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43887292 kB' 'MemAvailable: 46194680 kB' 'Buffers: 11496 kB' 'Cached: 10292304 kB' 'SwapCached: 16 kB' 'Active: 8624116 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148988 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607188 kB' 'Mapped: 189520 kB' 'Shmem: 7623860 kB' 'KReclaimable: 249156 kB' 'Slab: 795404 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546248 kB' 'KernelStack: 21904 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9594380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
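Annotation: the odd_alloc setup traced earlier (hugepages.sh@159 through @84) requests 2098176 kB of hugepages, i.e. 2049 MiB, which the script turns into 1025 pages of 2 MiB and splits across the two NUMA nodes, with the odd page landing on node 0 (513 + 512, as shown by the nodes_test assignments in the trace). A small self-contained sketch of that split; variable names are illustrative, not the exact SPDK code:

#!/usr/bin/env bash
# Sketch of the per-node split behind the odd_alloc test set up above:
# 1025 hugepages (HUGEMEM=2049 MiB at 2 MiB per page) spread over 2 nodes,
# with the remainder going to node 0, matching the 513/512 values in the trace.
nr_hugepages=1025
no_nodes=2
nodes_test=()
per_node=$((nr_hugepages / no_nodes))      # 512
remainder=$((nr_hugepages % no_nodes))     # 1
for ((node = 0; node < no_nodes; node++)); do
    nodes_test[node]=$per_node
    ((node < remainder)) && ((nodes_test[node]++))
done
for node in "${!nodes_test[@]}"; do
    # Mirrors the "nodeN=<pages> expecting <pages>" lines echoed by the test.
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done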
00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.553 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.554 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43887300 kB' 'MemAvailable: 46194688 kB' 'Buffers: 11496 kB' 'Cached: 10292320 kB' 'SwapCached: 16 kB' 'Active: 8623768 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148640 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606860 kB' 'Mapped: 189504 kB' 'Shmem: 7623876 kB' 'KReclaimable: 249156 kB' 'Slab: 795308 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546152 kB' 'KernelStack: 21968 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9594400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.555 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
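The repeated xtrace entries around this point all follow one pattern: the test's meminfo helper scans "Key: value" pairs until it hits the requested key (HugePages_Surp and HugePages_Rsvd so far, both 0 on this box) and echoes the value, either from /proc/meminfo or from a node-specific file. As a rough standalone illustration of that lookup only, not the project's actual setup/common.sh helper, a minimal version could look like the sketch below (the function name get_meminfo_sketch and its argument handling are assumptions):

    # Minimal sketch: echo the value of a meminfo key, system-wide or per NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line
        local mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}              # node files prefix lines with "Node <id> "
            if [[ $line =~ ^"$get":[[:space:]]+([0-9]+) ]]; then
                echo "${BASH_REMATCH[1]}"           # numeric value only; the kB unit is dropped
                return 0
            fi
        done <"$mem_f"
        return 1
    }

For instance, get_meminfo_sketch HugePages_Rsvd should print 0 on this machine, while get_meminfo_sketch HugePages_Surp 0 reads node 0's meminfo file instead of /proc/meminfo.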
00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.556 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 
20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:47.557 nr_hugepages=1025 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.557 resv_hugepages=0 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.557 surplus_hugepages=0 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.557 anon_hugepages=0 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.557 20:54:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43887988 kB' 'MemAvailable: 46195376 kB' 'Buffers: 11496 kB' 'Cached: 10292360 kB' 'SwapCached: 16 kB' 'Active: 8623428 kB' 'Inactive: 2283636 kB' 'Active(anon): 8148300 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606452 kB' 'Mapped: 189504 kB' 'Shmem: 7623916 kB' 'KReclaimable: 249156 kB' 'Slab: 795308 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546152 kB' 'KernelStack: 21952 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9594420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.557 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.558 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.559 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26966044 kB' 'MemUsed: 5626040 kB' 'SwapCached: 16 kB' 'Active: 2897476 kB' 'Inactive: 180800 kB' 'Active(anon): 2680856 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2875920 kB' 'Mapped: 127712 kB' 'AnonPages: 205548 kB' 'Shmem: 2478500 kB' 'KernelStack: 12712 kB' 'PageTables: 4500 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134432 kB' 'Slab: 391296 kB' 'SReclaimable: 134432 kB' 'SUnreclaim: 256864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 
20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
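At hugepages.sh@112-117 in the trace the script switches from the system-wide totals (1025 pages, 0 reserved, 0 surplus) to per-node bookkeeping: it expects the odd allocation to be split 512/513 across the two NUMA nodes and folds each node's reserved and surplus pages into that expectation before checking the node counters. A rough standalone rendering of that bookkeeping is sketched below; the array name, the 512/513 split, and the final echo are illustrative assumptions, and the real script's pass/fail handling continues past this excerpt:

    #!/usr/bin/env bash
    # Sketch of the per-node hugepage accounting traced above (2-node box assumed).
    declare -A nodes_test=([0]=512 [1]=513)   # expected split of the 1025 pages
    resv=0                                    # HugePages_Rsvd reported above

    for node in "${!nodes_test[@]}"; do
        meminfo=/sys/devices/system/node/node$node/meminfo
        # Node meminfo lines read "Node <id> Key: value", so the key is field 3.
        surp=$(awk '$3 == "HugePages_Surp:"  {print $4}' "$meminfo")
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$meminfo")
        (( nodes_test[node] += resv + surp ))
        echo "node$node: expecting ${nodes_test[node]} hugepages, sysfs reports $total"
    done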
00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.560 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... the same IFS=': ' / read -r var val _ / field test / continue trace repeats for each remaining node0 meminfo field from KernelStack through HugePages_Total ...] 00:03:47.561 20:54:14
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 16921692 kB' 'MemUsed: 10781456 kB' 'SwapCached: 0 kB' 'Active: 5726392 kB' 'Inactive: 2102836 kB' 'Active(anon): 5467884 kB' 'Inactive(anon): 78808 kB' 'Active(file): 258508 kB' 'Inactive(file): 2024028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7427976 kB' 'Mapped: 61792 kB' 'AnonPages: 401312 kB' 'Shmem: 5145440 kB' 'KernelStack: 9256 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114724 kB' 'Slab: 404012 kB' 'SReclaimable: 114724 kB' 'SUnreclaim: 289288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 
20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ [... the same IFS=': ' / read -r var val _ / field test / continue trace repeats for each node1 meminfo field from MemFree through HugePages_Free ...] 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
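The trace above is setup/common.sh stepping through /sys/devices/system/node/node1/meminfo one field at a time until it reaches the requested HugePages_Surp entry. As a standalone illustration only (the function name, argument order, and the fallback to 0 below are assumptions, not the script's own code), the same lookup can be sketched as:

#!/usr/bin/env bash
# Hypothetical sketch of the field-by-field lookup traced above: fetch one value
# from /proc/meminfo, or from a per-node meminfo file when a node is given.
get_meminfo_sketch() {
    local key=$1 node=$2 file=/proc/meminfo line var val rest
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}               # per-node files prefix every line with "Node <N>"
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"                     # e.g. HugePages_Surp -> 0 on the node1 snapshot above
            return 0
        fi
    done < "$file"
    echo 0                                       # assumed default when the key is absent
}
get_meminfo_sketch HugePages_Surp 1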
00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:47.563 node0=512 expecting 513 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:47.563 node1=513 expecting 512 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:47.563 00:03:47.563 real 0m3.337s 00:03:47.563 user 0m1.232s 00:03:47.563 sys 0m2.143s 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.563 20:54:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.563 ************************************ 00:03:47.563 END TEST odd_alloc 00:03:47.563 ************************************ 00:03:47.563 20:54:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.563 20:54:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:47.563 20:54:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.563 20:54:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.563 20:54:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.563 ************************************ 00:03:47.563 START TEST custom_alloc 00:03:47.563 ************************************ 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:47.563 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.564 20:54:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:50.982 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:80:04.7 
(8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.982 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.982 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42847232 kB' 'MemAvailable: 45154620 kB' 'Buffers: 11496 kB' 'Cached: 10292472 kB' 'SwapCached: 16 kB' 'Active: 8624716 kB' 'Inactive: 2283636 kB' 'Active(anon): 8149588 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607680 kB' 'Mapped: 189580 kB' 'Shmem: 7624028 kB' 'KReclaimable: 249156 kB' 'Slab: 795140 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545984 kB' 'KernelStack: 21936 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9595052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213508 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
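The snapshot above reports 'HugePages_Total: 1536', which is exactly the sum of the two per-node counts the custom_alloc trace assembled earlier (nodes_hp[0]=512 and nodes_hp[1]=1024, passed to setup.sh through HUGENODE). A quick standalone check of that arithmetic, with variable names that are illustrative rather than the script's own:

# Only the numbers below come from the trace; the names are hypothetical.
hugepage_kb=2048                                     # 'Hugepagesize: 2048 kB' in the snapshot above
node0_request=1048576                                # argument of the first get_test_nr_hugepages call
node1_request=2097152                                # argument of the second get_test_nr_hugepages call
node0_pages=$(( node0_request / hugepage_kb ))       # 512, matching nodes_hp[0]=512
node1_pages=$(( node1_request / hugepage_kb ))       # 1024, matching nodes_hp[1]=1024
HUGENODE="nodes_hp[0]=$node0_pages,nodes_hp[1]=$node1_pages"
echo "$HUGENODE => $(( node0_pages + node1_pages )) hugepages expected"   # 1536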
00:03:50.983 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... the same IFS=': ' / read -r var val _ / field test / continue trace repeats for each meminfo field from Active(anon) through WritebackTmp ...] 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- #
continue 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.984 20:54:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
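At this point the check above has established anon=0 (no transparent hugepages are involved), and the trace is about to read HugePages_Surp for the system as a whole and then, presumably mirroring the odd_alloc flow earlier, per node, before comparing the observed counts against the requested layout. For orientation only, a rough standalone equivalent of that final per-node comparison (standard sysfs paths; the expected total of 1536 comes from HUGENODE above; this is not the script's own code):

# Hypothetical verification sketch: sum the 2048 kB hugepages reserved on each
# NUMA node and compare the total with the 1536 requested via HUGENODE.
expected_total=1536
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "${node_dir##*/}: $nr hugepages"
    total=$(( total + nr ))
done
if (( total == expected_total )); then
    echo "OK: $total hugepages allocated across nodes"
else
    echo "MISMATCH: got $total, expected $expected_total"
fi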
00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42847864 kB' 'MemAvailable: 45155252 kB' 'Buffers: 11496 kB' 'Cached: 10292472 kB' 'SwapCached: 16 kB' 'Active: 8624504 kB' 'Inactive: 2283636 kB' 'Active(anon): 8149376 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607476 kB' 'Mapped: 189516 kB' 'Shmem: 7624028 kB' 'KReclaimable: 249156 kB' 'Slab: 795188 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546032 kB' 'KernelStack: 21952 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9595068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213476 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.984 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.984 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' [... the same IFS=': ' / read -r var val _ / field test / continue trace repeats for each meminfo field from Cached through SecPageTables ...] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 
20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42847864 kB' 'MemAvailable: 45155252 kB' 'Buffers: 11496 kB' 'Cached: 10292488 kB' 'SwapCached: 16 kB' 'Active: 8624128 kB' 'Inactive: 2283636 kB' 
'Active(anon): 8149000 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607044 kB' 'Mapped: 189516 kB' 'Shmem: 7624044 kB' 'KReclaimable: 249156 kB' 'Slab: 795188 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546032 kB' 'KernelStack: 21936 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9595088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213476 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 
20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.985 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.986 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:50.987 nr_hugepages=1536 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.987 resv_hugepages=0 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.987 surplus_hugepages=0 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.987 anon_hugepages=0 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.987 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42848220 kB' 'MemAvailable: 45155608 kB' 'Buffers: 11496 kB' 'Cached: 10292512 kB' 'SwapCached: 16 kB' 'Active: 8624560 kB' 'Inactive: 2283636 kB' 'Active(anon): 8149432 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607456 kB' 'Mapped: 189516 kB' 'Shmem: 7624068 kB' 'KReclaimable: 249156 kB' 'Slab: 795188 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546032 kB' 'KernelStack: 21952 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 
'Committed_AS: 9595112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213476 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.988 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.989 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26977152 kB' 'MemUsed: 5614932 kB' 'SwapCached: 16 kB' 'Active: 2900016 kB' 'Inactive: 180800 kB' 'Active(anon): 2683396 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2875940 kB' 'Mapped: 128216 kB' 'AnonPages: 208048 kB' 'Shmem: 2478520 kB' 'KernelStack: 12664 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134432 kB' 'Slab: 391396 kB' 'SReclaimable: 134432 kB' 'SUnreclaim: 256964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.990 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
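[editor's note] For readers skimming the trace: every "continue" above is one non-matching meminfo key. The loop in setup/common.sh that the xtrace is exercising splits each line on ': ', skips keys other than the requested one, and echoes the first matching value. A minimal, self-contained rendering of that pattern is sketched below; the name get_meminfo_sketch and the simplified prefix handling are hypothetical, not the script's exact code.

  # Sketch of the key-matching loop seen in the trace (IFS=': ' / read / continue / echo / return 0).
  get_meminfo_sketch() {
    local get=$1 node=${2:-}                 # e.g. HugePages_Surp and an optional NUMA node index
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
      line=${line#"Node $node "}             # per-node files prefix each line with "Node <N> "
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue       # the continue/IFS/read cycle repeated throughout the trace
      echo "$val"
      return 0
    done < "$mem_f"
    return 1
  }
  # Usage (hypothetical): get_meminfo_sketch HugePages_Surp 0   -> prints 0 on the machine in this log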
00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
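[editor's note] The surrounding hugepages.sh steps (@110 through @128 in the trace) amount to a bookkeeping check: the 1536-page pool was pre-seeded as 512 pages on node 0 and 1024 on node 1, each node's surplus is folded in, and the per-node totals must match what the test requested. A condensed sketch under those assumptions follows; the variable names are hypothetical and the surplus is hard-coded to the 0 kB the trace reads back from each node.

  # Condensed per-node accounting as exercised by the custom_alloc test above.
  nr_hugepages=1536 surp=0 resv=0
  declare -a nodes_sys=(512 1024)       # counts read back from the per-node meminfo files
  declare -a nodes_test=(512 1024)      # counts the custom_alloc test requested per node
  (( 1536 == nr_hugepages + surp + resv )) || echo "unexpected global hugepage count"
  for node in "${!nodes_test[@]}"; do
    hp_surp=0                           # HugePages_Surp read per node in the trace; both nodes report 0
    (( nodes_test[node] += resv + hp_surp ))
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done
  # Prints "node0=512 expecting 512" and "node1=1024 expecting 1024", matching the log output further down.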
00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.991 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.992 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 15874040 kB' 'MemUsed: 11829108 kB' 'SwapCached: 0 kB' 'Active: 5726964 kB' 'Inactive: 2102836 kB' 'Active(anon): 5468456 kB' 'Inactive(anon): 78808 kB' 'Active(file): 258508 kB' 'Inactive(file): 2024028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7428124 kB' 'Mapped: 61956 kB' 'AnonPages: 401800 kB' 'Shmem: 5145588 kB' 'KernelStack: 9256 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114724 kB' 'Slab: 403792 kB' 'SReclaimable: 114724 kB' 'SUnreclaim: 289068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.992 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.992 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 
20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
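[editor's note] The mem_f / mapfile commands visible in the trace show how the per-node snapshot is taken before the key scan starts: fall back to /proc/meminfo unless the node-specific file exists, read it into an array, and strip the "Node <N> " prefix so the keys line up with the system-wide format. Condensed into a standalone form (extglob assumed, as the pattern in the trace implies):

  shopt -s extglob
  node=1
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] \
    && mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # "Node 1 HugePages_Free: 1024" -> "HugePages_Free: 1024"
  printf '%s\n' "${mem[@]}"          # the snapshot that the IFS=': ' read loop then scans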
00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.993 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:50.994 node0=512 expecting 512 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:50.994 node1=1024 expecting 1024 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:50.994 00:03:50.994 real 0m3.504s 00:03:50.994 user 0m1.348s 00:03:50.994 sys 0m2.208s 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.994 20:54:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.994 ************************************ 00:03:50.994 END TEST custom_alloc 00:03:50.994 ************************************ 00:03:50.994 20:54:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:50.994 20:54:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:50.994 20:54:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.994 20:54:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.994 20:54:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.254 ************************************ 00:03:51.254 START TEST no_shrink_alloc 00:03:51.254 ************************************ 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.254 20:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:54.548 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.548 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
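[editor's note] For the no_shrink_alloc setup traced above, get_test_nr_hugepages 2097152 0 resolves to 1024 pages pinned to node 0 only; with the 2048 kB Hugepagesize reported in the meminfo dump below, that is simply the requested size divided by the default hugepage size. A back-of-the-envelope sketch (names hypothetical, not the script's exact arithmetic):

  size_kb=2097152
  default_hugepagesize_kb=2048                             # "Hugepagesize: 2048 kB" in the dump below
  nr_hugepages=$(( size_kb / default_hugepagesize_kb ))    # = 1024
  declare -a nodes_test
  user_nodes=(0)                                           # only node 0 is requested for this test
  for n in "${user_nodes[@]}"; do
    nodes_test[n]=$nr_hugepages
  done
  echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}" # -> nr_hugepages=1024 node0=1024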
00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43933124 kB' 'MemAvailable: 46240512 kB' 'Buffers: 11496 kB' 'Cached: 10292628 kB' 'SwapCached: 16 kB' 'Active: 8625196 kB' 'Inactive: 2283636 kB' 'Active(anon): 8150068 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607908 kB' 'Mapped: 189592 kB' 'Shmem: 7624184 kB' 'KReclaimable: 249156 kB' 'Slab: 795456 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546300 kB' 'KernelStack: 21936 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9595732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213460 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.548 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.549 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43933676 kB' 'MemAvailable: 46241064 kB' 'Buffers: 11496 kB' 'Cached: 10292632 kB' 'SwapCached: 16 kB' 'Active: 8625104 kB' 'Inactive: 2283636 kB' 'Active(anon): 8149976 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607884 kB' 'Mapped: 189524 kB' 
'Shmem: 7624188 kB' 'KReclaimable: 249156 kB' 'Slab: 795456 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546300 kB' 'KernelStack: 21968 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9595752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213444 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 
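The wall of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" lines that follows is bash xtrace of the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo once (the quoted block printed just above), then walks the snapshot field by field until the requested key matches and echoes its value. A minimal sketch of that helper, reconstructed from the trace rather than copied from the real setup/common.sh, so details may differ:

    shopt -s extglob
    get_meminfo() {
        # Field to fetch (e.g. HugePages_Surp) and an optional NUMA node number.
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # With node empty (as in this run) this path does not exist, so /proc/meminfo is used.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N "
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"     # e.g. var=HugePages_Surp val=0
            [[ $var == "$get" ]] || continue  # the comparison repeated over and over in the trace
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp    # prints 0 on the box traced above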
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.550 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.551 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 
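Each right-hand side in these comparisons is printed with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and so on) because the operand was a quoted string: under set -x, bash escapes a quoted == operand so the trace shows it being matched literally rather than as a glob. The same effect can be reproduced outside the test harness (shown here with the default "+" PS4 prompt instead of the source@line prefix used in this log):

    bash -xc '[[ HugePages_Surp == "HugePages_Surp" ]]'
    # the trace line looks like: + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]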
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43933676 kB' 'MemAvailable: 46241064 kB' 'Buffers: 11496 kB' 'Cached: 10292632 kB' 'SwapCached: 16 kB' 'Active: 8625104 kB' 'Inactive: 2283636 kB' 'Active(anon): 8149976 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607884 kB' 'Mapped: 189524 kB' 'Shmem: 7624188 kB' 'KReclaimable: 249156 kB' 'Slab: 795456 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546300 kB' 'KernelStack: 21968 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 
'Committed_AS: 9595772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213444 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.552 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 
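The snapshot this lookup is walking already pins down the hugepage state the test cares about: HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB. Those numbers are internally consistent, since the Hugetlb figure is just pages times page size; a quick standalone check (not part of the test scripts):

    pages=1024; page_kb=2048
    echo $(( pages * page_kb ))   # 2097152, matching the 'Hugetlb: 2097152 kB' field above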
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.553 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.554 nr_hugepages=1024 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.554 resv_hugepages=0 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.554 surplus_hugepages=0 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.554 anon_hugepages=0 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43933260 kB' 'MemAvailable: 46240648 kB' 'Buffers: 11496 kB' 'Cached: 10292672 kB' 'SwapCached: 16 kB' 'Active: 8625160 kB' 'Inactive: 2283636 kB' 'Active(anon): 8150032 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607884 kB' 'Mapped: 189524 kB' 'Shmem: 7624228 kB' 'KReclaimable: 249156 kB' 'Slab: 795456 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 546300 kB' 'KernelStack: 21968 kB' 
'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9595796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213444 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.554 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc 
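Just before this HugePages_Total lookup, hugepages.sh folded the earlier lookups into its bookkeeping: anon=0, surp=0, resv=0, and it echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 before asserting that 1024 equals nr_hugepages + surp + resv and that 1024 equals nr_hugepages. Plugging in the values read back above reproduces those checks; a self-contained sketch of the accounting, with nr_hugepages hard-coded to the value the trace echoed (the exact expressions in hugepages.sh may differ):

    anon=0; surp=0; resv=0    # AnonHugePages, HugePages_Surp, HugePages_Rsvd from the snapshots
    nr_hugepages=1024         # assumed: matches nr_hugepages=1024 echoed at hugepages.sh@102
    (( 1024 == nr_hugepages + surp + resv )) &&
    (( 1024 == nr_hugepages )) &&
    echo 'no_shrink_alloc: the full allocation is still present'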
-- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.555 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # 
return 0 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.556 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25948604 kB' 'MemUsed: 6643480 kB' 'SwapCached: 16 kB' 'Active: 2896980 kB' 'Inactive: 180800 kB' 'Active(anon): 2680360 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2875944 kB' 'Mapped: 128220 kB' 'AnonPages: 204936 kB' 'Shmem: 2478524 kB' 'KernelStack: 12664 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134432 kB' 'Slab: 391640 kB' 'SReclaimable: 134432 kB' 'SUnreclaim: 257208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.557 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:54.558 node0=1024 expecting 1024 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.558 20:54:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:57.851 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.851 
0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.851 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.851 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43916824 kB' 'MemAvailable: 46224212 kB' 'Buffers: 11496 kB' 'Cached: 10292776 kB' 'SwapCached: 16 kB' 'Active: 8626056 kB' 'Inactive: 2283636 kB' 'Active(anon): 8150928 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608100 kB' 'Mapped: 189616 kB' 'Shmem: 7624332 kB' 'KReclaimable: 249156 kB' 'Slab: 795032 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545876 kB' 'KernelStack: 22000 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9596596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213652 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.851 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.852 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43918384 kB' 'MemAvailable: 46225772 kB' 'Buffers: 11496 kB' 'Cached: 10292780 kB' 'SwapCached: 16 kB' 'Active: 8626192 kB' 'Inactive: 2283636 kB' 'Active(anon): 8151064 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608824 kB' 'Mapped: 189536 kB' 'Shmem: 7624336 kB' 'KReclaimable: 249156 kB' 'Slab: 795032 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545876 kB' 'KernelStack: 22032 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9597736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213636 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.853 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 
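Note: the xtrace above is a per-key scan of /proc/meminfo (with a per-node fallback to /sys/devices/system/node/node<N>/meminfo when a node is given; here node is empty, so it falls through to /proc/meminfo). A minimal sketch of that lookup pattern, reconstructed from the trace rather than taken from the real setup/common.sh:

    # Sketch of the meminfo lookup the trace exercises (assumes the standard
    # "Key:   value [kB]" layout of /proc/meminfo; not the actual SPDK helper).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Same comparison the trace shows for every key until it hits the target.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0
    }

    # Usage matching the values derived in this trace:
    anon=$(get_meminfo_sketch AnonHugePages)    # -> 0
    surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0
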
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 
20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.854 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.855 20:54:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43921648 kB' 'MemAvailable: 46229036 kB' 'Buffers: 11496 kB' 'Cached: 10292796 kB' 'SwapCached: 16 kB' 'Active: 8626096 kB' 'Inactive: 2283636 kB' 'Active(anon): 8150968 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608704 kB' 'Mapped: 189536 kB' 'Shmem: 7624352 kB' 'KReclaimable: 249156 kB' 'Slab: 794904 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545748 kB' 'KernelStack: 22032 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9599244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213652 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.855 20:54:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.855 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 
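The snapshots printed in this trace report HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which are self-consistent: 1024 pages * 2048 kB = 2097152 kB (2 GiB), i.e. the 1024 pre-allocated 2 MB hugepages the test expects. A quick re-check on a live system (illustrative one-liner, not part of the test scripts):

    # Hugetlb should equal HugePages_Total * Hugepagesize when only one
    # hugepage size is in use, as in the snapshot above.
    awk '/^HugePages_Total:/ {t=$2}
         /^Hugepagesize:/    {sz=$2}
         /^Hugetlb:/         {h=$2}
         END {printf "total=%d pagesize=%dkB product=%dkB hugetlb=%dkB\n", t, sz, t*sz, h}' /proc/meminfo
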
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.856 nr_hugepages=1024 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.856 resv_hugepages=0 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.856 surplus_hugepages=0 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.856 anon_hugepages=0 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.856 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.857 20:54:24 
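Just above, the script settles on resv=0, echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then evaluates the two (( ... )) checks (the literal 1024 on the left is already expanded in the xtrace, so the trace does not show which variable it came from). A condensed sketch of that bookkeeping; meminfo_val and expected are illustrative names, not the real hugepages.sh variables:

    expected=1024   # the already-expanded count seen in the trace

    # Hypothetical helper standing in for the traced get_meminfo calls.
    meminfo_val() { awk -v k="$1" '$1 == k":" {print $2; exit}' /proc/meminfo; }

    nr_hugepages=$(meminfo_val HugePages_Total)   # 1024 in the snapshot
    resv=$(meminfo_val HugePages_Rsvd)            # 0
    surp=$(meminfo_val HugePages_Surp)            # 0
    anon=$(meminfo_val AnonHugePages)             # 0 (kB)

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    (( expected == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    (( expected == nr_hugepages ))               || echo "nr_hugepages mismatch"
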
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43922940 kB' 'MemAvailable: 46230328 kB' 'Buffers: 11496 kB' 'Cached: 10292820 kB' 'SwapCached: 16 kB' 'Active: 8626280 kB' 'Inactive: 2283636 kB' 'Active(anon): 8151152 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608368 kB' 'Mapped: 189536 kB' 'Shmem: 7624376 kB' 'KReclaimable: 249156 kB' 'Slab: 794872 kB' 'SReclaimable: 249156 kB' 'SUnreclaim: 545716 kB' 'KernelStack: 22048 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9597780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.857 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25973584 kB' 'MemUsed: 6618500 kB' 'SwapCached: 16 kB' 'Active: 2897820 kB' 'Inactive: 180800 kB' 'Active(anon): 2681200 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2875952 kB' 'Mapped: 127720 kB' 'AnonPages: 205780 kB' 'Shmem: 2478532 kB' 'KernelStack: 12728 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134432 kB' 'Slab: 391080 kB' 'SReclaimable: 134432 kB' 'SUnreclaim: 256648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.858 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 
20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.859 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.860 node0=1024 expecting 1024 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.860 00:03:57.860 real 0m6.607s 00:03:57.860 user 0m2.499s 00:03:57.860 sys 0m4.209s 00:03:57.860 
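For readers following the trace: the long run of "continue" iterations above is setup/common.sh's get_meminfo walking every key of /proc/meminfo (or the per-node copy under /sys/devices/system/node/node0/meminfo) until it reaches the requested field, first HugePages_Total and then HugePages_Surp for node 0. A minimal sketch of that lookup, assuming the same file layout as in this run; get_meminfo below is a hypothetical re-implementation, not the upstream helper:

    # Walk a meminfo file as "key: value" pairs and print the value for one key.
    # Per-node files prefix every line with "Node <N> ", which is stripped first.
    get_meminfo() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            [[ $line == "Node "* ]] && line=${line#Node * }    # drop the "Node 0 " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # get_meminfo HugePages_Total 0 -> 1024 and get_meminfo HugePages_Surp 0 -> 0,
    # matching the "echo 1024" and "echo 0" lines in the trace above.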
20:54:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.860 20:54:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.860 ************************************ 00:03:57.860 END TEST no_shrink_alloc 00:03:57.860 ************************************ 00:03:57.860 20:54:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:57.860 20:54:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:57.860 00:03:57.860 real 0m25.299s 00:03:57.860 user 0m8.764s 00:03:57.860 sys 0m15.305s 00:03:57.860 20:54:24 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.860 20:54:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.860 ************************************ 00:03:57.860 END TEST hugepages 00:03:57.860 ************************************ 00:03:57.860 20:54:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:57.860 20:54:24 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:57.860 20:54:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.860 20:54:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.860 20:54:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.860 ************************************ 00:03:57.860 START TEST driver 00:03:57.860 ************************************ 00:03:57.860 20:54:25 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:57.860 * Looking for test storage... 
00:03:57.860 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:57.860 20:54:25 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:57.860 20:54:25 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.860 20:54:25 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.171 20:54:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:03.171 20:54:29 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.171 20:54:29 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.171 20:54:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:03.171 ************************************ 00:04:03.171 START TEST guess_driver 00:04:03.171 ************************************ 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:03.171 20:54:29 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:03.171 Looking for driver=vfio-pci 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.171 20:54:29 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.708 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.968 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.968 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.968 20:54:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.968 20:54:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.890 20:54:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.890 20:54:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.890 20:54:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.890 20:54:34 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:07.890 20:54:34 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:07.890 20:54:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.890 20:54:34 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.075 00:04:12.075 real 0m9.652s 00:04:12.075 user 0m2.545s 00:04:12.075 sys 0m4.797s 00:04:12.075 20:54:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.075 20:54:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.075 ************************************ 00:04:12.075 END TEST guess_driver 00:04:12.075 ************************************ 00:04:12.075 20:54:39 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:12.075 00:04:12.075 real 0m14.225s 00:04:12.075 user 0m3.747s 
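The guess_driver block above boils down to one decision: vfio-pci is chosen because the host exposes 176 IOMMU groups and `modprobe --show-depends vfio_pci` resolves to real .ko modules; on failure the caller instead sees the literal string "No valid driver found" that driver.sh@51 tests for. A condensed sketch of that decision under those assumptions; pick_driver here is a paraphrase of the traced logic, not the upstream setup/driver.sh:

    pick_driver() {
        local unsafe_vfio=N groups
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        groups=$(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)    # 176 in this run
        if { (( groups > 0 )) || [[ $unsafe_vfio == [Yy] ]]; } &&
           modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci                 # what driver.sh@37 echoes above
            return 0
        fi
        echo 'No valid driver found'      # the failure string driver.sh@51 checks
        return 1
    }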
00:04:12.075 sys 0m7.328s 00:04:12.075 20:54:39 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.075 20:54:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.075 ************************************ 00:04:12.075 END TEST driver 00:04:12.075 ************************************ 00:04:12.075 20:54:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:12.075 20:54:39 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:12.075 20:54:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.075 20:54:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.075 20:54:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.075 ************************************ 00:04:12.075 START TEST devices 00:04:12.075 ************************************ 00:04:12.075 20:54:39 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:12.334 * Looking for test storage... 00:04:12.334 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:12.334 20:54:39 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:12.334 20:54:39 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:12.334 20:54:39 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.334 20:54:39 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:16.538 20:54:43 setup.sh.devices -- 
scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:16.538 20:54:43 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:16.538 No valid GPT data, bailing 00:04:16.538 20:54:43 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.538 20:54:43 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:16.538 20:54:43 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:16.538 20:54:43 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:16.538 20:54:43 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:16.538 20:54:43 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:16.538 20:54:43 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.538 20:54:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:16.538 ************************************ 00:04:16.538 START TEST nvme_mount 00:04:16.538 ************************************ 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 
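Just before the nvme_mount test starts, the devices.sh steps above settle on nvme0n1 as the test disk: the namespace is not zoned, spdk-gpt.py finds no label ("No valid GPT data, bailing"), blkid reports no partition-table type, and the 1600321314816-byte capacity clears the 3 GiB floor from devices.sh@198. A rough sketch of that selection, collapsing the spdk-gpt.py probe into a plain blkid check; pick_test_disk is illustrative only:

    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in the trace

    pick_test_disk() {
        local blk dev size
        for blk in /sys/block/nvme*; do
            dev=${blk##*/}
            [[ $dev == *c* ]] && continue                         # skip nvmeXcYnZ multipath nodes
            [[ $(< "$blk/queue/zoned") == none ]] || continue     # skip zoned namespaces
            [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue   # partition table present: in use
            size=$(( $(< "$blk/size") * 512 ))                    # sysfs size is in 512-byte sectors
            (( size >= min_disk_size )) || continue
            echo "$dev"
            return 0
        done
        return 1
    }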
00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.539 20:54:43 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:17.106 Creating new GPT entries in memory. 00:04:17.106 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.106 other utilities. 00:04:17.106 20:54:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.106 20:54:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.106 20:54:44 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.106 20:54:44 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.106 20:54:44 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.044 Creating new GPT entries in memory. 00:04:18.044 The operation has completed successfully. 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 743104 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.044 20:54:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.335 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.335 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.628 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:21.628 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:21.628 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:21.628 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.628 20:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:24.920 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.920 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 
20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 
-- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.921 20:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.213 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.214 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:28.214 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.214 20:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.214 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.214 00:04:28.214 real 0m11.973s 00:04:28.214 user 0m3.396s 00:04:28.214 sys 0m6.456s 00:04:28.214 
20:54:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.214 20:54:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:28.214 ************************************ 00:04:28.214 END TEST nvme_mount 00:04:28.214 ************************************ 00:04:28.214 20:54:55 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:28.214 20:54:55 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:28.214 20:54:55 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.214 20:54:55 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.214 20:54:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:28.214 ************************************ 00:04:28.214 START TEST dm_mount 00:04:28.214 ************************************ 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:28.214 20:54:55 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:29.237 Creating new GPT entries in memory. 00:04:29.237 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:29.237 other utilities. 00:04:29.237 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:29.237 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.237 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:29.237 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.237 20:54:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:30.177 Creating new GPT entries in memory. 00:04:30.177 The operation has completed successfully. 00:04:30.177 20:54:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:30.177 20:54:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.177 20:54:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.177 20:54:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.177 20:54:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:31.116 The operation has completed successfully. 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 747388 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.116 20:54:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:34.410 
20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.410 20:55:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.946 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:37.205 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:37.205 00:04:37.205 real 0m9.186s 00:04:37.205 user 0m1.979s 00:04:37.205 sys 0m4.203s 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.205 20:55:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.205 ************************************ 00:04:37.205 END TEST dm_mount 00:04:37.205 ************************************ 00:04:37.205 20:55:04 setup.sh.devices -- common/autotest_common.sh@1142 
-- # return 0 00:04:37.205 20:55:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:37.205 20:55:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:37.205 20:55:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.205 20:55:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.205 20:55:04 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.205 20:55:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.205 20:55:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.463 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:37.463 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:37.463 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.463 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.463 20:55:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:37.463 20:55:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:37.463 20:55:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.463 20:55:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.463 20:55:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.463 20:55:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.463 20:55:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:37.463 00:04:37.463 real 0m25.382s 00:04:37.463 user 0m6.762s 00:04:37.463 sys 0m13.382s 00:04:37.463 20:55:04 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.463 20:55:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.463 ************************************ 00:04:37.463 END TEST devices 00:04:37.463 ************************************ 00:04:37.463 20:55:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:37.463 00:04:37.463 real 1m28.186s 00:04:37.463 user 0m26.496s 00:04:37.463 sys 0m50.034s 00:04:37.463 20:55:04 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.463 20:55:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.463 ************************************ 00:04:37.463 END TEST setup.sh 00:04:37.463 ************************************ 00:04:37.721 20:55:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.721 20:55:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:41.001 Hugepages 00:04:41.001 node hugesize free / total 00:04:41.001 node0 1048576kB 0 / 0 00:04:41.001 node0 2048kB 2048 / 2048 00:04:41.001 node1 1048576kB 0 / 0 00:04:41.001 node1 2048kB 0 / 0 00:04:41.001 00:04:41.001 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.001 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:41.001 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:41.001 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:41.001 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:41.001 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:41.001 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:41.001 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:41.001 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 
00:04:41.001 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:41.001 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:41.001 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:41.001 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:41.001 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:41.001 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:41.001 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:41.001 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:41.001 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:41.001 20:55:08 -- spdk/autotest.sh@130 -- # uname -s 00:04:41.001 20:55:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:41.001 20:55:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:41.001 20:55:08 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:44.276 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:44.276 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:44.276 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:44.276 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:44.276 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:44.276 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:44.277 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:46.176 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:46.176 20:55:13 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:47.110 20:55:14 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:47.110 20:55:14 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:47.110 20:55:14 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:47.110 20:55:14 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:47.110 20:55:14 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:47.110 20:55:14 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:47.110 20:55:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.110 20:55:14 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:47.110 20:55:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:47.110 20:55:14 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:47.110 20:55:14 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:47.110 20:55:14 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.391 Waiting for block devices as requested 00:04:50.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:50.391 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:50.391 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:50.391 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:50.391 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:50.391 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:50.391 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:50.391 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:50.648 
0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:50.648 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:50.648 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:50.905 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:50.905 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:50.905 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:51.163 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:51.163 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:51.163 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:51.422 20:55:18 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:51.422 20:55:18 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:51.422 20:55:18 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:51.422 20:55:18 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:04:51.422 20:55:18 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:51.422 20:55:18 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:51.422 20:55:18 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:51.422 20:55:18 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:51.422 20:55:18 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:51.422 20:55:18 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:51.422 20:55:18 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:51.422 20:55:18 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:51.422 20:55:18 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:51.422 20:55:18 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:51.422 20:55:18 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:51.422 20:55:18 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:51.422 20:55:18 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:51.422 20:55:18 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:51.422 20:55:18 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:51.422 20:55:18 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:51.422 20:55:18 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:51.422 20:55:18 -- common/autotest_common.sh@1557 -- # continue 00:04:51.422 20:55:18 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:51.422 20:55:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.422 20:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:51.422 20:55:18 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:51.422 20:55:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.422 20:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:51.422 20:55:18 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:54.703 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:80:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:54.703 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:54.703 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:56.079 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:56.079 20:55:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:56.079 20:55:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.079 20:55:23 -- common/autotest_common.sh@10 -- # set +x 00:04:56.079 20:55:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:56.079 20:55:23 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:56.079 20:55:23 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:56.079 20:55:23 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:56.079 20:55:23 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:56.079 20:55:23 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:56.079 20:55:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:56.079 20:55:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:56.079 20:55:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.079 20:55:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:56.079 20:55:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:56.079 20:55:23 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:56.079 20:55:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:56.079 20:55:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:56.079 20:55:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:56.079 20:55:23 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:56.079 20:55:23 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:56.079 20:55:23 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:56.079 20:55:23 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:04:56.079 20:55:23 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:04:56.079 20:55:23 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=756794 00:04:56.079 20:55:23 -- common/autotest_common.sh@1598 -- # waitforlisten 756794 00:04:56.079 20:55:23 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.079 20:55:23 -- common/autotest_common.sh@829 -- # '[' -z 756794 ']' 00:04:56.079 20:55:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.079 20:55:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.079 20:55:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.079 20:55:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.079 20:55:23 -- common/autotest_common.sh@10 -- # set +x 00:04:56.336 [2024-07-15 20:55:23.392269] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:04:56.336 [2024-07-15 20:55:23.392354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756794 ] 00:04:56.336 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.336 [2024-07-15 20:55:23.463988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.336 [2024-07-15 20:55:23.542994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.267 20:55:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.267 20:55:24 -- common/autotest_common.sh@862 -- # return 0 00:04:57.267 20:55:24 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:57.267 20:55:24 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:57.267 20:55:24 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:00.549 nvme0n1 00:05:00.549 20:55:27 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:00.549 [2024-07-15 20:55:27.362135] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:00.549 request: 00:05:00.549 { 00:05:00.549 "nvme_ctrlr_name": "nvme0", 00:05:00.549 "password": "test", 00:05:00.549 "method": "bdev_nvme_opal_revert", 00:05:00.549 "req_id": 1 00:05:00.549 } 00:05:00.549 Got JSON-RPC error response 00:05:00.549 response: 00:05:00.549 { 00:05:00.549 "code": -32602, 00:05:00.549 "message": "Invalid parameters" 00:05:00.549 } 00:05:00.549 20:55:27 -- common/autotest_common.sh@1604 -- # true 00:05:00.549 20:55:27 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:00.549 20:55:27 -- common/autotest_common.sh@1608 -- # killprocess 756794 00:05:00.549 20:55:27 -- common/autotest_common.sh@948 -- # '[' -z 756794 ']' 00:05:00.549 20:55:27 -- common/autotest_common.sh@952 -- # kill -0 756794 00:05:00.549 20:55:27 -- common/autotest_common.sh@953 -- # uname 00:05:00.549 20:55:27 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.549 20:55:27 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 756794 00:05:00.549 20:55:27 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.549 20:55:27 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.549 20:55:27 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 756794' 00:05:00.549 killing process with pid 756794 00:05:00.549 20:55:27 -- common/autotest_common.sh@967 -- # kill 756794 00:05:00.549 20:55:27 -- common/autotest_common.sh@972 -- # wait 756794 00:05:02.456 20:55:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:02.456 20:55:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:02.456 20:55:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:02.456 20:55:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:02.456 20:55:29 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:02.456 20:55:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.456 20:55:29 -- common/autotest_common.sh@10 -- # set +x 00:05:02.456 20:55:29 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:02.456 20:55:29 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:02.456 20:55:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:02.456 20:55:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.456 20:55:29 -- common/autotest_common.sh@10 -- # set +x 00:05:02.456 ************************************ 00:05:02.456 START TEST env 00:05:02.456 ************************************ 00:05:02.456 20:55:29 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:02.714 * Looking for test storage... 00:05:02.714 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:02.714 20:55:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:02.714 20:55:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.714 20:55:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.714 20:55:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.714 ************************************ 00:05:02.714 START TEST env_memory 00:05:02.714 ************************************ 00:05:02.714 20:55:29 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:02.714 00:05:02.714 00:05:02.714 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.714 http://cunit.sourceforge.net/ 00:05:02.714 00:05:02.714 00:05:02.714 Suite: memory 00:05:02.714 Test: alloc and free memory map ...[2024-07-15 20:55:29.857884] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:02.714 passed 00:05:02.714 Test: mem map translation ...[2024-07-15 20:55:29.871055] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:02.714 [2024-07-15 20:55:29.871071] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:02.714 [2024-07-15 20:55:29.871103] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:02.714 [2024-07-15 20:55:29.871111] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:02.714 passed 00:05:02.714 Test: mem map registration ...[2024-07-15 20:55:29.891452] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:02.714 [2024-07-15 20:55:29.891468] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:02.714 passed 00:05:02.714 Test: mem map adjacent registrations ...passed 00:05:02.714 00:05:02.714 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.714 suites 1 1 n/a 0 0 00:05:02.714 tests 4 4 4 0 0 00:05:02.714 asserts 152 152 152 0 n/a 00:05:02.714 00:05:02.714 Elapsed time = 0.082 seconds 00:05:02.714 00:05:02.714 real 0m0.091s 00:05:02.714 user 0m0.084s 00:05:02.714 sys 0m0.006s 00:05:02.714 20:55:29 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.714 20:55:29 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:05:02.714 ************************************ 00:05:02.714 END TEST env_memory 00:05:02.714 ************************************ 00:05:02.714 20:55:29 env -- common/autotest_common.sh@1142 -- # return 0 00:05:02.714 20:55:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:02.714 20:55:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.714 20:55:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.715 20:55:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.715 ************************************ 00:05:02.715 START TEST env_vtophys 00:05:02.715 ************************************ 00:05:02.715 20:55:29 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:03.043 EAL: lib.eal log level changed from notice to debug 00:05:03.043 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.043 EAL: Detected lcore 1 as core 1 on socket 0 00:05:03.043 EAL: Detected lcore 2 as core 2 on socket 0 00:05:03.043 EAL: Detected lcore 3 as core 3 on socket 0 00:05:03.043 EAL: Detected lcore 4 as core 4 on socket 0 00:05:03.043 EAL: Detected lcore 5 as core 5 on socket 0 00:05:03.043 EAL: Detected lcore 6 as core 6 on socket 0 00:05:03.043 EAL: Detected lcore 7 as core 8 on socket 0 00:05:03.043 EAL: Detected lcore 8 as core 9 on socket 0 00:05:03.043 EAL: Detected lcore 9 as core 10 on socket 0 00:05:03.043 EAL: Detected lcore 10 as core 11 on socket 0 00:05:03.043 EAL: Detected lcore 11 as core 12 on socket 0 00:05:03.043 EAL: Detected lcore 12 as core 13 on socket 0 00:05:03.043 EAL: Detected lcore 13 as core 14 on socket 0 00:05:03.043 EAL: Detected lcore 14 as core 16 on socket 0 00:05:03.043 EAL: Detected lcore 15 as core 17 on socket 0 00:05:03.043 EAL: Detected lcore 16 as core 18 on socket 0 00:05:03.043 EAL: Detected lcore 17 as core 19 on socket 0 00:05:03.043 EAL: Detected lcore 18 as core 20 on socket 0 00:05:03.043 EAL: Detected lcore 19 as core 21 on socket 0 00:05:03.043 EAL: Detected lcore 20 as core 22 on socket 0 00:05:03.043 EAL: Detected lcore 21 as core 24 on socket 0 00:05:03.043 EAL: Detected lcore 22 as core 25 on socket 0 00:05:03.043 EAL: Detected lcore 23 as core 26 on socket 0 00:05:03.043 EAL: Detected lcore 24 as core 27 on socket 0 00:05:03.043 EAL: Detected lcore 25 as core 28 on socket 0 00:05:03.043 EAL: Detected lcore 26 as core 29 on socket 0 00:05:03.043 EAL: Detected lcore 27 as core 30 on socket 0 00:05:03.043 EAL: Detected lcore 28 as core 0 on socket 1 00:05:03.043 EAL: Detected lcore 29 as core 1 on socket 1 00:05:03.043 EAL: Detected lcore 30 as core 2 on socket 1 00:05:03.044 EAL: Detected lcore 31 as core 3 on socket 1 00:05:03.044 EAL: Detected lcore 32 as core 4 on socket 1 00:05:03.044 EAL: Detected lcore 33 as core 5 on socket 1 00:05:03.044 EAL: Detected lcore 34 as core 6 on socket 1 00:05:03.044 EAL: Detected lcore 35 as core 8 on socket 1 00:05:03.044 EAL: Detected lcore 36 as core 9 on socket 1 00:05:03.044 EAL: Detected lcore 37 as core 10 on socket 1 00:05:03.044 EAL: Detected lcore 38 as core 11 on socket 1 00:05:03.044 EAL: Detected lcore 39 as core 12 on socket 1 00:05:03.044 EAL: Detected lcore 40 as core 13 on socket 1 00:05:03.044 EAL: Detected lcore 41 as core 14 on socket 1 00:05:03.044 EAL: Detected lcore 42 as core 16 on socket 1 00:05:03.044 EAL: Detected lcore 43 as core 17 on socket 1 
00:05:03.044 EAL: Detected lcore 44 as core 18 on socket 1 00:05:03.044 EAL: Detected lcore 45 as core 19 on socket 1 00:05:03.044 EAL: Detected lcore 46 as core 20 on socket 1 00:05:03.044 EAL: Detected lcore 47 as core 21 on socket 1 00:05:03.044 EAL: Detected lcore 48 as core 22 on socket 1 00:05:03.044 EAL: Detected lcore 49 as core 24 on socket 1 00:05:03.044 EAL: Detected lcore 50 as core 25 on socket 1 00:05:03.044 EAL: Detected lcore 51 as core 26 on socket 1 00:05:03.044 EAL: Detected lcore 52 as core 27 on socket 1 00:05:03.044 EAL: Detected lcore 53 as core 28 on socket 1 00:05:03.044 EAL: Detected lcore 54 as core 29 on socket 1 00:05:03.044 EAL: Detected lcore 55 as core 30 on socket 1 00:05:03.044 EAL: Detected lcore 56 as core 0 on socket 0 00:05:03.044 EAL: Detected lcore 57 as core 1 on socket 0 00:05:03.044 EAL: Detected lcore 58 as core 2 on socket 0 00:05:03.044 EAL: Detected lcore 59 as core 3 on socket 0 00:05:03.044 EAL: Detected lcore 60 as core 4 on socket 0 00:05:03.044 EAL: Detected lcore 61 as core 5 on socket 0 00:05:03.044 EAL: Detected lcore 62 as core 6 on socket 0 00:05:03.044 EAL: Detected lcore 63 as core 8 on socket 0 00:05:03.044 EAL: Detected lcore 64 as core 9 on socket 0 00:05:03.044 EAL: Detected lcore 65 as core 10 on socket 0 00:05:03.044 EAL: Detected lcore 66 as core 11 on socket 0 00:05:03.044 EAL: Detected lcore 67 as core 12 on socket 0 00:05:03.044 EAL: Detected lcore 68 as core 13 on socket 0 00:05:03.044 EAL: Detected lcore 69 as core 14 on socket 0 00:05:03.044 EAL: Detected lcore 70 as core 16 on socket 0 00:05:03.044 EAL: Detected lcore 71 as core 17 on socket 0 00:05:03.044 EAL: Detected lcore 72 as core 18 on socket 0 00:05:03.044 EAL: Detected lcore 73 as core 19 on socket 0 00:05:03.044 EAL: Detected lcore 74 as core 20 on socket 0 00:05:03.044 EAL: Detected lcore 75 as core 21 on socket 0 00:05:03.044 EAL: Detected lcore 76 as core 22 on socket 0 00:05:03.044 EAL: Detected lcore 77 as core 24 on socket 0 00:05:03.044 EAL: Detected lcore 78 as core 25 on socket 0 00:05:03.044 EAL: Detected lcore 79 as core 26 on socket 0 00:05:03.044 EAL: Detected lcore 80 as core 27 on socket 0 00:05:03.044 EAL: Detected lcore 81 as core 28 on socket 0 00:05:03.044 EAL: Detected lcore 82 as core 29 on socket 0 00:05:03.044 EAL: Detected lcore 83 as core 30 on socket 0 00:05:03.044 EAL: Detected lcore 84 as core 0 on socket 1 00:05:03.044 EAL: Detected lcore 85 as core 1 on socket 1 00:05:03.044 EAL: Detected lcore 86 as core 2 on socket 1 00:05:03.044 EAL: Detected lcore 87 as core 3 on socket 1 00:05:03.044 EAL: Detected lcore 88 as core 4 on socket 1 00:05:03.044 EAL: Detected lcore 89 as core 5 on socket 1 00:05:03.044 EAL: Detected lcore 90 as core 6 on socket 1 00:05:03.044 EAL: Detected lcore 91 as core 8 on socket 1 00:05:03.044 EAL: Detected lcore 92 as core 9 on socket 1 00:05:03.044 EAL: Detected lcore 93 as core 10 on socket 1 00:05:03.044 EAL: Detected lcore 94 as core 11 on socket 1 00:05:03.044 EAL: Detected lcore 95 as core 12 on socket 1 00:05:03.044 EAL: Detected lcore 96 as core 13 on socket 1 00:05:03.044 EAL: Detected lcore 97 as core 14 on socket 1 00:05:03.044 EAL: Detected lcore 98 as core 16 on socket 1 00:05:03.044 EAL: Detected lcore 99 as core 17 on socket 1 00:05:03.044 EAL: Detected lcore 100 as core 18 on socket 1 00:05:03.044 EAL: Detected lcore 101 as core 19 on socket 1 00:05:03.044 EAL: Detected lcore 102 as core 20 on socket 1 00:05:03.044 EAL: Detected lcore 103 as core 21 on socket 1 00:05:03.044 EAL: Detected 
lcore 104 as core 22 on socket 1 00:05:03.044 EAL: Detected lcore 105 as core 24 on socket 1 00:05:03.044 EAL: Detected lcore 106 as core 25 on socket 1 00:05:03.044 EAL: Detected lcore 107 as core 26 on socket 1 00:05:03.044 EAL: Detected lcore 108 as core 27 on socket 1 00:05:03.044 EAL: Detected lcore 109 as core 28 on socket 1 00:05:03.044 EAL: Detected lcore 110 as core 29 on socket 1 00:05:03.044 EAL: Detected lcore 111 as core 30 on socket 1 00:05:03.044 EAL: Maximum logical cores by configuration: 128 00:05:03.044 EAL: Detected CPU lcores: 112 00:05:03.044 EAL: Detected NUMA nodes: 2 00:05:03.044 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:03.044 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:03.044 EAL: Checking presence of .so 'librte_eal.so' 00:05:03.044 EAL: Detected static linkage of DPDK 00:05:03.044 EAL: No shared files mode enabled, IPC will be disabled 00:05:03.044 EAL: Bus pci wants IOVA as 'DC' 00:05:03.044 EAL: Buses did not request a specific IOVA mode. 00:05:03.044 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:03.044 EAL: Selected IOVA mode 'VA' 00:05:03.044 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.044 EAL: Probing VFIO support... 00:05:03.044 EAL: IOMMU type 1 (Type 1) is supported 00:05:03.044 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:03.044 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:03.044 EAL: VFIO support initialized 00:05:03.044 EAL: Ask a virtual area of 0x2e000 bytes 00:05:03.044 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:03.044 EAL: Setting up physically contiguous memory... 00:05:03.044 EAL: Setting maximum number of open files to 524288 00:05:03.044 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:03.044 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:03.044 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:03.044 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.044 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:03.044 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.044 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.044 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:03.044 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:03.044 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.044 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:03.044 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.044 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.044 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:03.044 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:03.044 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.044 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:03.044 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.044 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.044 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:03.044 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:03.044 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.044 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:03.044 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.044 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.044 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:03.044 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:03.044 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:03.044 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.044 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:03.044 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.044 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.044 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:03.044 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:03.044 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.044 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:03.044 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.044 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.044 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:03.044 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:03.044 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.044 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:03.044 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.044 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.044 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:03.044 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:03.044 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.044 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:03.044 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.044 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.044 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:03.044 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:03.044 EAL: Hugepages will be freed exactly as allocated. 00:05:03.044 EAL: No shared files mode enabled, IPC is disabled 00:05:03.044 EAL: No shared files mode enabled, IPC is disabled 00:05:03.044 EAL: TSC frequency is ~2500000 KHz 00:05:03.044 EAL: Main lcore 0 is ready (tid=7fda88b93a00;cpuset=[0]) 00:05:03.044 EAL: Trying to obtain current memory policy. 00:05:03.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.044 EAL: Restoring previous memory policy: 0 00:05:03.044 EAL: request: mp_malloc_sync 00:05:03.044 EAL: No shared files mode enabled, IPC is disabled 00:05:03.044 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.044 EAL: No shared files mode enabled, IPC is disabled 00:05:03.044 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.044 00:05:03.044 00:05:03.044 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.044 http://cunit.sourceforge.net/ 00:05:03.044 00:05:03.044 00:05:03.044 Suite: components_suite 00:05:03.044 Test: vtophys_malloc_test ...passed 00:05:03.044 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:03.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.044 EAL: Restoring previous memory policy: 4 00:05:03.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.044 EAL: request: mp_malloc_sync 00:05:03.044 EAL: No shared files mode enabled, IPC is disabled 00:05:03.044 EAL: Heap on socket 0 was expanded by 4MB 00:05:03.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.044 EAL: request: mp_malloc_sync 00:05:03.044 EAL: No shared files mode enabled, IPC is disabled 00:05:03.044 EAL: Heap on socket 0 was shrunk by 4MB 00:05:03.044 EAL: Trying to obtain current memory policy. 
00:05:03.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.044 EAL: Restoring previous memory policy: 4 00:05:03.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.044 EAL: request: mp_malloc_sync 00:05:03.044 EAL: No shared files mode enabled, IPC is disabled 00:05:03.044 EAL: Heap on socket 0 was expanded by 6MB 00:05:03.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.044 EAL: request: mp_malloc_sync 00:05:03.044 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was shrunk by 6MB 00:05:03.045 EAL: Trying to obtain current memory policy. 00:05:03.045 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.045 EAL: Restoring previous memory policy: 4 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was expanded by 10MB 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was shrunk by 10MB 00:05:03.045 EAL: Trying to obtain current memory policy. 00:05:03.045 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.045 EAL: Restoring previous memory policy: 4 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was expanded by 18MB 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was shrunk by 18MB 00:05:03.045 EAL: Trying to obtain current memory policy. 00:05:03.045 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.045 EAL: Restoring previous memory policy: 4 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was expanded by 34MB 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was shrunk by 34MB 00:05:03.045 EAL: Trying to obtain current memory policy. 00:05:03.045 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.045 EAL: Restoring previous memory policy: 4 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was expanded by 66MB 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was shrunk by 66MB 00:05:03.045 EAL: Trying to obtain current memory policy. 
00:05:03.045 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.045 EAL: Restoring previous memory policy: 4 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was expanded by 130MB 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was shrunk by 130MB 00:05:03.045 EAL: Trying to obtain current memory policy. 00:05:03.045 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.045 EAL: Restoring previous memory policy: 4 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.045 EAL: request: mp_malloc_sync 00:05:03.045 EAL: No shared files mode enabled, IPC is disabled 00:05:03.045 EAL: Heap on socket 0 was expanded by 258MB 00:05:03.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.306 EAL: request: mp_malloc_sync 00:05:03.306 EAL: No shared files mode enabled, IPC is disabled 00:05:03.306 EAL: Heap on socket 0 was shrunk by 258MB 00:05:03.306 EAL: Trying to obtain current memory policy. 00:05:03.306 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.306 EAL: Restoring previous memory policy: 4 00:05:03.306 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.306 EAL: request: mp_malloc_sync 00:05:03.306 EAL: No shared files mode enabled, IPC is disabled 00:05:03.306 EAL: Heap on socket 0 was expanded by 514MB 00:05:03.306 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.306 EAL: request: mp_malloc_sync 00:05:03.306 EAL: No shared files mode enabled, IPC is disabled 00:05:03.306 EAL: Heap on socket 0 was shrunk by 514MB 00:05:03.306 EAL: Trying to obtain current memory policy. 
00:05:03.306 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.565 EAL: Restoring previous memory policy: 4 00:05:03.565 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.565 EAL: request: mp_malloc_sync 00:05:03.565 EAL: No shared files mode enabled, IPC is disabled 00:05:03.565 EAL: Heap on socket 0 was expanded by 1026MB 00:05:03.824 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.824 EAL: request: mp_malloc_sync 00:05:03.824 EAL: No shared files mode enabled, IPC is disabled 00:05:03.824 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:03.824 passed 00:05:03.824 00:05:03.824 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.824 suites 1 1 n/a 0 0 00:05:03.824 tests 2 2 2 0 0 00:05:03.824 asserts 497 497 497 0 n/a 00:05:03.824 00:05:03.824 Elapsed time = 0.960 seconds 00:05:03.824 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.824 EAL: request: mp_malloc_sync 00:05:03.824 EAL: No shared files mode enabled, IPC is disabled 00:05:03.824 EAL: Heap on socket 0 was shrunk by 2MB 00:05:03.824 EAL: No shared files mode enabled, IPC is disabled 00:05:03.824 EAL: No shared files mode enabled, IPC is disabled 00:05:03.824 EAL: No shared files mode enabled, IPC is disabled 00:05:03.824 00:05:03.824 real 0m1.088s 00:05:03.824 user 0m0.635s 00:05:03.824 sys 0m0.424s 00:05:03.824 20:55:31 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.824 20:55:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:03.824 ************************************ 00:05:03.824 END TEST env_vtophys 00:05:03.824 ************************************ 00:05:04.083 20:55:31 env -- common/autotest_common.sh@1142 -- # return 0 00:05:04.083 20:55:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.083 20:55:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.083 20:55:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.083 20:55:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.083 ************************************ 00:05:04.083 START TEST env_pci 00:05:04.083 ************************************ 00:05:04.083 20:55:31 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.083 00:05:04.083 00:05:04.083 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.083 http://cunit.sourceforge.net/ 00:05:04.083 00:05:04.083 00:05:04.083 Suite: pci 00:05:04.083 Test: pci_hook ...[2024-07-15 20:55:31.184396] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 758271 has claimed it 00:05:04.083 EAL: Cannot find device (10000:00:01.0) 00:05:04.083 EAL: Failed to attach device on primary process 00:05:04.083 passed 00:05:04.083 00:05:04.083 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.083 suites 1 1 n/a 0 0 00:05:04.083 tests 1 1 1 0 0 00:05:04.083 asserts 25 25 25 0 n/a 00:05:04.083 00:05:04.083 Elapsed time = 0.034 seconds 00:05:04.083 00:05:04.083 real 0m0.055s 00:05:04.083 user 0m0.008s 00:05:04.083 sys 0m0.047s 00:05:04.083 20:55:31 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.083 20:55:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:04.083 ************************************ 00:05:04.083 END TEST env_pci 00:05:04.083 ************************************ 
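The env_memory, env_vtophys and env_pci cases above are standalone CUnit binaries that env.sh drives through run_test, so a failing one can be rerun on its own; a minimal sketch, assuming the same workspace path, preallocated hugepages, and root privileges (all of which this harness already provides):

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # adjust to your checkout

  # The same binaries env.sh invoked above, one at a time.
  sudo "$SPDK/test/env/memory/memory_ut"    # mem map alloc/translation/registration
  sudo "$SPDK/test/env/vtophys/vtophys"     # heap grow/shrink with mem event callbacks
  sudo "$SPDK/test/env/pci/pci_ut"          # PCI device claim/hook test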
00:05:04.083 20:55:31 env -- common/autotest_common.sh@1142 -- # return 0 00:05:04.083 20:55:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:04.083 20:55:31 env -- env/env.sh@15 -- # uname 00:05:04.083 20:55:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:04.083 20:55:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:04.083 20:55:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.083 20:55:31 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:04.084 20:55:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.084 20:55:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.084 ************************************ 00:05:04.084 START TEST env_dpdk_post_init 00:05:04.084 ************************************ 00:05:04.084 20:55:31 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.084 EAL: Detected CPU lcores: 112 00:05:04.084 EAL: Detected NUMA nodes: 2 00:05:04.084 EAL: Detected static linkage of DPDK 00:05:04.084 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.084 EAL: Selected IOVA mode 'VA' 00:05:04.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.084 EAL: VFIO support initialized 00:05:04.084 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.343 EAL: Using IOMMU type 1 (Type 1) 00:05:04.911 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:09.104 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:09.104 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:09.104 Starting DPDK initialization... 00:05:09.104 Starting SPDK post initialization... 00:05:09.104 SPDK NVMe probe 00:05:09.104 Attaching to 0000:d8:00.0 00:05:09.104 Attached to 0000:d8:00.0 00:05:09.104 Cleaning up... 
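Both the spdk_tgt start-up earlier and the env_dpdk_post_init run above print 'EAL: No free 2048 kB hugepages reported on node 1', meaning the 2 MB hugepage pool on the second NUMA node is empty when EAL scans it. The per-node pools can be checked before a run through standard Linux sysfs; a small sketch (the sysfs layout is standard, the node list is whatever the machine exposes):

  # Show total and free 2 MB hugepages for each NUMA node.
  for node in /sys/devices/system/node/node[0-9]*; do
    hp="$node/hugepages/hugepages-2048kB"
    echo "$(basename "$node"): total=$(cat "$hp/nr_hugepages") free=$(cat "$hp/free_hugepages")"
  done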
00:05:09.104 00:05:09.104 real 0m4.734s 00:05:09.104 user 0m3.560s 00:05:09.104 sys 0m0.417s 00:05:09.104 20:55:36 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.104 20:55:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.104 ************************************ 00:05:09.104 END TEST env_dpdk_post_init 00:05:09.104 ************************************ 00:05:09.104 20:55:36 env -- common/autotest_common.sh@1142 -- # return 0 00:05:09.104 20:55:36 env -- env/env.sh@26 -- # uname 00:05:09.104 20:55:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:09.104 20:55:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:09.104 20:55:36 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.104 20:55:36 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.104 20:55:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.104 ************************************ 00:05:09.104 START TEST env_mem_callbacks 00:05:09.104 ************************************ 00:05:09.104 20:55:36 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:09.104 EAL: Detected CPU lcores: 112 00:05:09.104 EAL: Detected NUMA nodes: 2 00:05:09.104 EAL: Detected static linkage of DPDK 00:05:09.104 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.104 EAL: Selected IOVA mode 'VA' 00:05:09.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.104 EAL: VFIO support initialized 00:05:09.104 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.104 00:05:09.104 00:05:09.104 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.104 http://cunit.sourceforge.net/ 00:05:09.104 00:05:09.104 00:05:09.104 Suite: memory 00:05:09.104 Test: test ... 
00:05:09.104 register 0x200000200000 2097152 00:05:09.104 malloc 3145728 00:05:09.104 register 0x200000400000 4194304 00:05:09.104 buf 0x200000500000 len 3145728 PASSED 00:05:09.104 malloc 64 00:05:09.104 buf 0x2000004fff40 len 64 PASSED 00:05:09.104 malloc 4194304 00:05:09.104 register 0x200000800000 6291456 00:05:09.104 buf 0x200000a00000 len 4194304 PASSED 00:05:09.104 free 0x200000500000 3145728 00:05:09.104 free 0x2000004fff40 64 00:05:09.104 unregister 0x200000400000 4194304 PASSED 00:05:09.104 free 0x200000a00000 4194304 00:05:09.104 unregister 0x200000800000 6291456 PASSED 00:05:09.104 malloc 8388608 00:05:09.104 register 0x200000400000 10485760 00:05:09.104 buf 0x200000600000 len 8388608 PASSED 00:05:09.104 free 0x200000600000 8388608 00:05:09.104 unregister 0x200000400000 10485760 PASSED 00:05:09.104 passed 00:05:09.104 00:05:09.104 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.104 suites 1 1 n/a 0 0 00:05:09.104 tests 1 1 1 0 0 00:05:09.104 asserts 15 15 15 0 n/a 00:05:09.104 00:05:09.104 Elapsed time = 0.005 seconds 00:05:09.104 00:05:09.104 real 0m0.063s 00:05:09.104 user 0m0.017s 00:05:09.104 sys 0m0.046s 00:05:09.104 20:55:36 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.104 20:55:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:09.104 ************************************ 00:05:09.104 END TEST env_mem_callbacks 00:05:09.104 ************************************ 00:05:09.104 20:55:36 env -- common/autotest_common.sh@1142 -- # return 0 00:05:09.104 00:05:09.104 real 0m6.552s 00:05:09.104 user 0m4.506s 00:05:09.104 sys 0m1.297s 00:05:09.104 20:55:36 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.104 20:55:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.104 ************************************ 00:05:09.104 END TEST env 00:05:09.104 ************************************ 00:05:09.104 20:55:36 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.104 20:55:36 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:09.104 20:55:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.104 20:55:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.104 20:55:36 -- common/autotest_common.sh@10 -- # set +x 00:05:09.104 ************************************ 00:05:09.104 START TEST rpc 00:05:09.104 ************************************ 00:05:09.104 20:55:36 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:09.370 * Looking for test storage... 00:05:09.370 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:09.370 20:55:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=759251 00:05:09.370 20:55:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.370 20:55:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 759251 00:05:09.370 20:55:36 rpc -- common/autotest_common.sh@829 -- # '[' -z 759251 ']' 00:05:09.370 20:55:36 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.370 20:55:36 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.370 20:55:36 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
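The rpc suite starting here launches a spdk_tgt instance and waits in waitforlisten until its JSON-RPC socket answers before issuing any commands. The same start-and-wait pattern is handy outside the harness; a minimal sketch, assuming the workspace path above and the default /var/tmp/spdk.sock socket (the polling loop is illustrative, not the harness's waitforlisten helper):

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # adjust to your checkout

  # Start the target; the -e bdev flag, as rpc.sh uses below, enables the bdev tracepoint group.
  "$SPDK/build/bin/spdk_tgt" -e bdev &
  tgt_pid=$!

  # Poll the JSON-RPC socket until the target accepts commands.
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"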
00:05:09.370 20:55:36 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.370 20:55:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.370 20:55:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:09.370 [2024-07-15 20:55:36.433017] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:09.370 [2024-07-15 20:55:36.433083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759251 ] 00:05:09.370 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.370 [2024-07-15 20:55:36.500730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.370 [2024-07-15 20:55:36.579252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:09.370 [2024-07-15 20:55:36.579288] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 759251' to capture a snapshot of events at runtime. 00:05:09.370 [2024-07-15 20:55:36.579297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.370 [2024-07-15 20:55:36.579306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.370 [2024-07-15 20:55:36.579313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid759251 for offline analysis/debug. 00:05:09.370 [2024-07-15 20:55:36.579338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.310 20:55:37 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.310 20:55:37 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:10.310 20:55:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:10.310 20:55:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:10.310 20:55:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:10.310 20:55:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:10.310 20:55:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.310 20:55:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.310 20:55:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.310 ************************************ 00:05:10.310 START TEST rpc_integrity 00:05:10.310 ************************************ 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:10.310 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.310 20:55:37 
rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.310 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.310 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.310 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.310 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:10.310 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.310 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.310 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.310 { 00:05:10.310 "name": "Malloc0", 00:05:10.310 "aliases": [ 00:05:10.310 "2d74390c-abf8-4a54-93c8-a466efb943ef" 00:05:10.310 ], 00:05:10.310 "product_name": "Malloc disk", 00:05:10.310 "block_size": 512, 00:05:10.310 "num_blocks": 16384, 00:05:10.310 "uuid": "2d74390c-abf8-4a54-93c8-a466efb943ef", 00:05:10.310 "assigned_rate_limits": { 00:05:10.311 "rw_ios_per_sec": 0, 00:05:10.311 "rw_mbytes_per_sec": 0, 00:05:10.311 "r_mbytes_per_sec": 0, 00:05:10.311 "w_mbytes_per_sec": 0 00:05:10.311 }, 00:05:10.311 "claimed": false, 00:05:10.311 "zoned": false, 00:05:10.311 "supported_io_types": { 00:05:10.311 "read": true, 00:05:10.311 "write": true, 00:05:10.311 "unmap": true, 00:05:10.311 "flush": true, 00:05:10.311 "reset": true, 00:05:10.311 "nvme_admin": false, 00:05:10.311 "nvme_io": false, 00:05:10.311 "nvme_io_md": false, 00:05:10.311 "write_zeroes": true, 00:05:10.311 "zcopy": true, 00:05:10.311 "get_zone_info": false, 00:05:10.311 "zone_management": false, 00:05:10.311 "zone_append": false, 00:05:10.311 "compare": false, 00:05:10.311 "compare_and_write": false, 00:05:10.311 "abort": true, 00:05:10.311 "seek_hole": false, 00:05:10.311 "seek_data": false, 00:05:10.311 "copy": true, 00:05:10.311 "nvme_iov_md": false 00:05:10.311 }, 00:05:10.311 "memory_domains": [ 00:05:10.311 { 00:05:10.311 "dma_device_id": "system", 00:05:10.311 "dma_device_type": 1 00:05:10.311 }, 00:05:10.311 { 00:05:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.311 "dma_device_type": 2 00:05:10.311 } 00:05:10.311 ], 00:05:10.311 "driver_specific": {} 00:05:10.311 } 00:05:10.311 ]' 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.311 [2024-07-15 20:55:37.395643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:10.311 [2024-07-15 20:55:37.395675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.311 [2024-07-15 20:55:37.395693] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x46c7460 00:05:10.311 [2024-07-15 20:55:37.395703] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.311 [2024-07-15 20:55:37.396548] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.311 [2024-07-15 20:55:37.396570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.311 Passthru0 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.311 { 00:05:10.311 "name": "Malloc0", 00:05:10.311 "aliases": [ 00:05:10.311 "2d74390c-abf8-4a54-93c8-a466efb943ef" 00:05:10.311 ], 00:05:10.311 "product_name": "Malloc disk", 00:05:10.311 "block_size": 512, 00:05:10.311 "num_blocks": 16384, 00:05:10.311 "uuid": "2d74390c-abf8-4a54-93c8-a466efb943ef", 00:05:10.311 "assigned_rate_limits": { 00:05:10.311 "rw_ios_per_sec": 0, 00:05:10.311 "rw_mbytes_per_sec": 0, 00:05:10.311 "r_mbytes_per_sec": 0, 00:05:10.311 "w_mbytes_per_sec": 0 00:05:10.311 }, 00:05:10.311 "claimed": true, 00:05:10.311 "claim_type": "exclusive_write", 00:05:10.311 "zoned": false, 00:05:10.311 "supported_io_types": { 00:05:10.311 "read": true, 00:05:10.311 "write": true, 00:05:10.311 "unmap": true, 00:05:10.311 "flush": true, 00:05:10.311 "reset": true, 00:05:10.311 "nvme_admin": false, 00:05:10.311 "nvme_io": false, 00:05:10.311 "nvme_io_md": false, 00:05:10.311 "write_zeroes": true, 00:05:10.311 "zcopy": true, 00:05:10.311 "get_zone_info": false, 00:05:10.311 "zone_management": false, 00:05:10.311 "zone_append": false, 00:05:10.311 "compare": false, 00:05:10.311 "compare_and_write": false, 00:05:10.311 "abort": true, 00:05:10.311 "seek_hole": false, 00:05:10.311 "seek_data": false, 00:05:10.311 "copy": true, 00:05:10.311 "nvme_iov_md": false 00:05:10.311 }, 00:05:10.311 "memory_domains": [ 00:05:10.311 { 00:05:10.311 "dma_device_id": "system", 00:05:10.311 "dma_device_type": 1 00:05:10.311 }, 00:05:10.311 { 00:05:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.311 "dma_device_type": 2 00:05:10.311 } 00:05:10.311 ], 00:05:10.311 "driver_specific": {} 00:05:10.311 }, 00:05:10.311 { 00:05:10.311 "name": "Passthru0", 00:05:10.311 "aliases": [ 00:05:10.311 "601dacf0-1af7-5776-99c5-19790d207da8" 00:05:10.311 ], 00:05:10.311 "product_name": "passthru", 00:05:10.311 "block_size": 512, 00:05:10.311 "num_blocks": 16384, 00:05:10.311 "uuid": "601dacf0-1af7-5776-99c5-19790d207da8", 00:05:10.311 "assigned_rate_limits": { 00:05:10.311 "rw_ios_per_sec": 0, 00:05:10.311 "rw_mbytes_per_sec": 0, 00:05:10.311 "r_mbytes_per_sec": 0, 00:05:10.311 "w_mbytes_per_sec": 0 00:05:10.311 }, 00:05:10.311 "claimed": false, 00:05:10.311 "zoned": false, 00:05:10.311 "supported_io_types": { 00:05:10.311 "read": true, 00:05:10.311 "write": true, 00:05:10.311 "unmap": true, 00:05:10.311 "flush": true, 00:05:10.311 "reset": true, 00:05:10.311 "nvme_admin": false, 00:05:10.311 "nvme_io": false, 00:05:10.311 "nvme_io_md": false, 00:05:10.311 "write_zeroes": true, 00:05:10.311 "zcopy": true, 00:05:10.311 "get_zone_info": false, 00:05:10.311 "zone_management": false, 00:05:10.311 "zone_append": false, 00:05:10.311 "compare": false, 00:05:10.311 "compare_and_write": 
false, 00:05:10.311 "abort": true, 00:05:10.311 "seek_hole": false, 00:05:10.311 "seek_data": false, 00:05:10.311 "copy": true, 00:05:10.311 "nvme_iov_md": false 00:05:10.311 }, 00:05:10.311 "memory_domains": [ 00:05:10.311 { 00:05:10.311 "dma_device_id": "system", 00:05:10.311 "dma_device_type": 1 00:05:10.311 }, 00:05:10.311 { 00:05:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.311 "dma_device_type": 2 00:05:10.311 } 00:05:10.311 ], 00:05:10.311 "driver_specific": { 00:05:10.311 "passthru": { 00:05:10.311 "name": "Passthru0", 00:05:10.311 "base_bdev_name": "Malloc0" 00:05:10.311 } 00:05:10.311 } 00:05:10.311 } 00:05:10.311 ]' 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.311 20:55:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.311 00:05:10.311 real 0m0.274s 00:05:10.311 user 0m0.164s 00:05:10.311 sys 0m0.048s 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.311 20:55:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.311 ************************************ 00:05:10.311 END TEST rpc_integrity 00:05:10.311 ************************************ 00:05:10.311 20:55:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.311 20:55:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:10.311 20:55:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.311 20:55:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.311 20:55:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.571 ************************************ 00:05:10.571 START TEST rpc_plugins 00:05:10.571 ************************************ 00:05:10.571 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:10.571 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:10.571 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.571 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.571 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.571 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
malloc=Malloc1 00:05:10.571 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:10.571 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.571 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.571 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.571 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:10.571 { 00:05:10.571 "name": "Malloc1", 00:05:10.571 "aliases": [ 00:05:10.571 "ff0062af-1a42-48a0-b060-95dc62e54d17" 00:05:10.571 ], 00:05:10.571 "product_name": "Malloc disk", 00:05:10.571 "block_size": 4096, 00:05:10.571 "num_blocks": 256, 00:05:10.571 "uuid": "ff0062af-1a42-48a0-b060-95dc62e54d17", 00:05:10.571 "assigned_rate_limits": { 00:05:10.571 "rw_ios_per_sec": 0, 00:05:10.571 "rw_mbytes_per_sec": 0, 00:05:10.571 "r_mbytes_per_sec": 0, 00:05:10.571 "w_mbytes_per_sec": 0 00:05:10.571 }, 00:05:10.571 "claimed": false, 00:05:10.571 "zoned": false, 00:05:10.571 "supported_io_types": { 00:05:10.571 "read": true, 00:05:10.571 "write": true, 00:05:10.571 "unmap": true, 00:05:10.571 "flush": true, 00:05:10.571 "reset": true, 00:05:10.571 "nvme_admin": false, 00:05:10.571 "nvme_io": false, 00:05:10.571 "nvme_io_md": false, 00:05:10.571 "write_zeroes": true, 00:05:10.571 "zcopy": true, 00:05:10.571 "get_zone_info": false, 00:05:10.571 "zone_management": false, 00:05:10.571 "zone_append": false, 00:05:10.571 "compare": false, 00:05:10.571 "compare_and_write": false, 00:05:10.571 "abort": true, 00:05:10.571 "seek_hole": false, 00:05:10.571 "seek_data": false, 00:05:10.571 "copy": true, 00:05:10.571 "nvme_iov_md": false 00:05:10.571 }, 00:05:10.571 "memory_domains": [ 00:05:10.571 { 00:05:10.571 "dma_device_id": "system", 00:05:10.571 "dma_device_type": 1 00:05:10.571 }, 00:05:10.571 { 00:05:10.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.571 "dma_device_type": 2 00:05:10.571 } 00:05:10.571 ], 00:05:10.571 "driver_specific": {} 00:05:10.571 } 00:05:10.571 ]' 00:05:10.571 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:10.571 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.572 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.572 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.572 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.572 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.572 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.572 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.572 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.572 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.572 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.572 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:10.572 20:55:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.572 00:05:10.572 real 0m0.134s 00:05:10.572 user 0m0.084s 00:05:10.572 sys 0m0.016s 00:05:10.572 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.572 20:55:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.572 ************************************ 00:05:10.572 END TEST rpc_plugins 00:05:10.572 ************************************ 00:05:10.572 20:55:37 rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:05:10.572 20:55:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.572 20:55:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.572 20:55:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.572 20:55:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.572 ************************************ 00:05:10.572 START TEST rpc_trace_cmd_test 00:05:10.572 ************************************ 00:05:10.572 20:55:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:10.572 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:10.572 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.572 20:55:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.572 20:55:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.572 20:55:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.572 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:10.572 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid759251", 00:05:10.572 "tpoint_group_mask": "0x8", 00:05:10.572 "iscsi_conn": { 00:05:10.572 "mask": "0x2", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "scsi": { 00:05:10.572 "mask": "0x4", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "bdev": { 00:05:10.572 "mask": "0x8", 00:05:10.572 "tpoint_mask": "0xffffffffffffffff" 00:05:10.572 }, 00:05:10.572 "nvmf_rdma": { 00:05:10.572 "mask": "0x10", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "nvmf_tcp": { 00:05:10.572 "mask": "0x20", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "ftl": { 00:05:10.572 "mask": "0x40", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "blobfs": { 00:05:10.572 "mask": "0x80", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "dsa": { 00:05:10.572 "mask": "0x200", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "thread": { 00:05:10.572 "mask": "0x400", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "nvme_pcie": { 00:05:10.572 "mask": "0x800", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "iaa": { 00:05:10.572 "mask": "0x1000", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "nvme_tcp": { 00:05:10.572 "mask": "0x2000", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "bdev_nvme": { 00:05:10.572 "mask": "0x4000", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 }, 00:05:10.572 "sock": { 00:05:10.572 "mask": "0x8000", 00:05:10.572 "tpoint_mask": "0x0" 00:05:10.572 } 00:05:10.572 }' 00:05:10.572 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:10.832 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:10.832 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.832 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.832 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.832 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.832 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.832 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.832 20:55:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.832 
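The rpc_integrity case above walks the full bdev lifecycle over JSON-RPC: create a malloc bdev, layer a passthru bdev on top of it, dump both with bdev_get_bdevs, then tear everything down. The same sequence can be issued by hand with rpc.py against the running target; a short sketch using the method names and arguments exercised above (the bdev names are whatever the target returns or you pass in):

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  # 8 MB malloc bdev with 512-byte blocks (16384 blocks, as in the dumps above).
  malloc=$($RPC bdev_malloc_create 8 512)

  # Claim it with a passthru bdev, then inspect both.
  $RPC bdev_passthru_create -b "$malloc" -p Passthru0
  $RPC bdev_get_bdevs

  # Tear down in reverse order.
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete "$malloc"

While the target is still up, the bdev tracepoints enabled for this run can also be snapshotted with the spdk_trace tool, using the invocation the target suggested at startup ('spdk_trace -s spdk_tgt -p 759251').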
20:55:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.832 00:05:10.832 real 0m0.201s 00:05:10.832 user 0m0.166s 00:05:10.832 sys 0m0.027s 00:05:10.832 20:55:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.832 20:55:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.832 ************************************ 00:05:10.832 END TEST rpc_trace_cmd_test 00:05:10.832 ************************************ 00:05:10.832 20:55:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.832 20:55:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:10.832 20:55:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.832 20:55:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.832 20:55:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.832 20:55:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.832 20:55:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.832 ************************************ 00:05:10.832 START TEST rpc_daemon_integrity 00:05:10.832 ************************************ 00:05:10.832 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:10.832 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.832 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.832 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.832 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.832 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.832 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.092 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.092 { 00:05:11.092 "name": "Malloc2", 00:05:11.092 "aliases": [ 00:05:11.092 "86dc188a-c015-4b78-a226-58c113f06e04" 00:05:11.092 ], 00:05:11.092 "product_name": "Malloc disk", 00:05:11.092 "block_size": 512, 00:05:11.092 "num_blocks": 16384, 00:05:11.092 "uuid": "86dc188a-c015-4b78-a226-58c113f06e04", 00:05:11.092 "assigned_rate_limits": { 00:05:11.092 "rw_ios_per_sec": 0, 00:05:11.092 "rw_mbytes_per_sec": 0, 00:05:11.092 "r_mbytes_per_sec": 0, 00:05:11.092 "w_mbytes_per_sec": 0 00:05:11.092 }, 00:05:11.092 "claimed": false, 00:05:11.092 "zoned": false, 00:05:11.092 "supported_io_types": { 00:05:11.092 "read": true, 00:05:11.092 "write": true, 00:05:11.092 "unmap": true, 00:05:11.092 "flush": true, 
00:05:11.092 "reset": true, 00:05:11.092 "nvme_admin": false, 00:05:11.092 "nvme_io": false, 00:05:11.092 "nvme_io_md": false, 00:05:11.092 "write_zeroes": true, 00:05:11.092 "zcopy": true, 00:05:11.092 "get_zone_info": false, 00:05:11.092 "zone_management": false, 00:05:11.092 "zone_append": false, 00:05:11.092 "compare": false, 00:05:11.092 "compare_and_write": false, 00:05:11.092 "abort": true, 00:05:11.092 "seek_hole": false, 00:05:11.092 "seek_data": false, 00:05:11.092 "copy": true, 00:05:11.092 "nvme_iov_md": false 00:05:11.092 }, 00:05:11.092 "memory_domains": [ 00:05:11.092 { 00:05:11.092 "dma_device_id": "system", 00:05:11.092 "dma_device_type": 1 00:05:11.092 }, 00:05:11.092 { 00:05:11.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.093 "dma_device_type": 2 00:05:11.093 } 00:05:11.093 ], 00:05:11.093 "driver_specific": {} 00:05:11.093 } 00:05:11.093 ]' 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.093 [2024-07-15 20:55:38.241938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:11.093 [2024-07-15 20:55:38.241966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.093 [2024-07-15 20:55:38.241982] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x46bde50 00:05:11.093 [2024-07-15 20:55:38.241991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.093 [2024-07-15 20:55:38.242686] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.093 [2024-07-15 20:55:38.242706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.093 Passthru0 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.093 { 00:05:11.093 "name": "Malloc2", 00:05:11.093 "aliases": [ 00:05:11.093 "86dc188a-c015-4b78-a226-58c113f06e04" 00:05:11.093 ], 00:05:11.093 "product_name": "Malloc disk", 00:05:11.093 "block_size": 512, 00:05:11.093 "num_blocks": 16384, 00:05:11.093 "uuid": "86dc188a-c015-4b78-a226-58c113f06e04", 00:05:11.093 "assigned_rate_limits": { 00:05:11.093 "rw_ios_per_sec": 0, 00:05:11.093 "rw_mbytes_per_sec": 0, 00:05:11.093 "r_mbytes_per_sec": 0, 00:05:11.093 "w_mbytes_per_sec": 0 00:05:11.093 }, 00:05:11.093 "claimed": true, 00:05:11.093 "claim_type": "exclusive_write", 00:05:11.093 "zoned": false, 00:05:11.093 "supported_io_types": { 00:05:11.093 "read": true, 00:05:11.093 "write": true, 00:05:11.093 "unmap": true, 00:05:11.093 "flush": true, 00:05:11.093 "reset": true, 00:05:11.093 "nvme_admin": false, 00:05:11.093 "nvme_io": false, 00:05:11.093 "nvme_io_md": false, 
00:05:11.093 "write_zeroes": true, 00:05:11.093 "zcopy": true, 00:05:11.093 "get_zone_info": false, 00:05:11.093 "zone_management": false, 00:05:11.093 "zone_append": false, 00:05:11.093 "compare": false, 00:05:11.093 "compare_and_write": false, 00:05:11.093 "abort": true, 00:05:11.093 "seek_hole": false, 00:05:11.093 "seek_data": false, 00:05:11.093 "copy": true, 00:05:11.093 "nvme_iov_md": false 00:05:11.093 }, 00:05:11.093 "memory_domains": [ 00:05:11.093 { 00:05:11.093 "dma_device_id": "system", 00:05:11.093 "dma_device_type": 1 00:05:11.093 }, 00:05:11.093 { 00:05:11.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.093 "dma_device_type": 2 00:05:11.093 } 00:05:11.093 ], 00:05:11.093 "driver_specific": {} 00:05:11.093 }, 00:05:11.093 { 00:05:11.093 "name": "Passthru0", 00:05:11.093 "aliases": [ 00:05:11.093 "5ef38814-e1a8-51f7-9f30-08fe6c07fe8c" 00:05:11.093 ], 00:05:11.093 "product_name": "passthru", 00:05:11.093 "block_size": 512, 00:05:11.093 "num_blocks": 16384, 00:05:11.093 "uuid": "5ef38814-e1a8-51f7-9f30-08fe6c07fe8c", 00:05:11.093 "assigned_rate_limits": { 00:05:11.093 "rw_ios_per_sec": 0, 00:05:11.093 "rw_mbytes_per_sec": 0, 00:05:11.093 "r_mbytes_per_sec": 0, 00:05:11.093 "w_mbytes_per_sec": 0 00:05:11.093 }, 00:05:11.093 "claimed": false, 00:05:11.093 "zoned": false, 00:05:11.093 "supported_io_types": { 00:05:11.093 "read": true, 00:05:11.093 "write": true, 00:05:11.093 "unmap": true, 00:05:11.093 "flush": true, 00:05:11.093 "reset": true, 00:05:11.093 "nvme_admin": false, 00:05:11.093 "nvme_io": false, 00:05:11.093 "nvme_io_md": false, 00:05:11.093 "write_zeroes": true, 00:05:11.093 "zcopy": true, 00:05:11.093 "get_zone_info": false, 00:05:11.093 "zone_management": false, 00:05:11.093 "zone_append": false, 00:05:11.093 "compare": false, 00:05:11.093 "compare_and_write": false, 00:05:11.093 "abort": true, 00:05:11.093 "seek_hole": false, 00:05:11.093 "seek_data": false, 00:05:11.093 "copy": true, 00:05:11.093 "nvme_iov_md": false 00:05:11.093 }, 00:05:11.093 "memory_domains": [ 00:05:11.093 { 00:05:11.093 "dma_device_id": "system", 00:05:11.093 "dma_device_type": 1 00:05:11.093 }, 00:05:11.093 { 00:05:11.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.093 "dma_device_type": 2 00:05:11.093 } 00:05:11.093 ], 00:05:11.093 "driver_specific": { 00:05:11.093 "passthru": { 00:05:11.093 "name": "Passthru0", 00:05:11.093 "base_bdev_name": "Malloc2" 00:05:11.093 } 00:05:11.093 } 00:05:11.093 } 00:05:11.093 ]' 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.093 20:55:38 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.093 00:05:11.093 real 0m0.267s 00:05:11.093 user 0m0.160s 00:05:11.093 sys 0m0.036s 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.093 20:55:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.093 ************************************ 00:05:11.093 END TEST rpc_daemon_integrity 00:05:11.093 ************************************ 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.354 20:55:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.354 20:55:38 rpc -- rpc/rpc.sh@84 -- # killprocess 759251 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@948 -- # '[' -z 759251 ']' 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@952 -- # kill -0 759251 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@953 -- # uname 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 759251 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 759251' 00:05:11.354 killing process with pid 759251 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@967 -- # kill 759251 00:05:11.354 20:55:38 rpc -- common/autotest_common.sh@972 -- # wait 759251 00:05:11.613 00:05:11.613 real 0m2.441s 00:05:11.613 user 0m3.100s 00:05:11.613 sys 0m0.719s 00:05:11.613 20:55:38 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.613 20:55:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.613 ************************************ 00:05:11.613 END TEST rpc 00:05:11.613 ************************************ 00:05:11.613 20:55:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.613 20:55:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.613 20:55:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.613 20:55:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.613 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:05:11.613 ************************************ 00:05:11.613 START TEST skip_rpc 00:05:11.613 ************************************ 00:05:11.613 20:55:38 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.873 * Looking for test storage... 
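For reference, the rpc_daemon_integrity sequence traced above boils down to the RPC calls below. This is a sketch using SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket; the rpc_cmd wrapper seen in the trace is roughly equivalent, and the Malloc2/Passthru0 names simply follow this run.

  # 8 MiB malloc bdev with 512-byte blocks (16384 blocks, as in the dump above)
  bdev=$(scripts/rpc.py bdev_malloc_create 8 512)        # prints the new bdev name, e.g. Malloc2
  scripts/rpc.py bdev_get_bdevs | jq length              # expect 1
  # claim it with a passthru vbdev, then both bdevs should be reported
  scripts/rpc.py bdev_passthru_create -b "$bdev" -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length              # expect 2
  # tear down in reverse order; the list should be empty again
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete "$bdev"
  scripts/rpc.py bdev_get_bdevs | jq length              # expect 0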
00:05:11.873 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:11.873 20:55:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:11.873 20:55:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:11.873 20:55:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.873 20:55:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.873 20:55:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.873 20:55:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.873 ************************************ 00:05:11.873 START TEST skip_rpc 00:05:11.873 ************************************ 00:05:11.873 20:55:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:11.873 20:55:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=759949 00:05:11.873 20:55:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.873 20:55:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.873 20:55:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:11.873 [2024-07-15 20:55:39.012912] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:11.873 [2024-07-15 20:55:39.012993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759949 ] 00:05:11.873 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.873 [2024-07-15 20:55:39.080375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.873 [2024-07-15 20:55:39.153299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.146 20:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.146 20:55:44 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 759949 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 759949 ']' 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 759949 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 759949 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 759949' 00:05:17.146 killing process with pid 759949 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 759949 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 759949 00:05:17.146 00:05:17.146 real 0m5.367s 00:05:17.146 user 0m5.134s 00:05:17.146 sys 0m0.273s 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.146 20:55:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.146 ************************************ 00:05:17.146 END TEST skip_rpc 00:05:17.146 ************************************ 00:05:17.146 20:55:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.146 20:55:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:17.146 20:55:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.146 20:55:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.146 20:55:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.146 ************************************ 00:05:17.146 START TEST skip_rpc_with_json 00:05:17.146 ************************************ 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=760789 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 760789 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 760789 ']' 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
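The skip_rpc case that just completed exercises the --no-rpc-server flag: the target comes up without ever creating its RPC listen socket, so the harness expects rpc_cmd spdk_get_version to fail (the NOT wrapper above succeeds only when the wrapped command fails). A minimal sketch of the same check, assuming the default /var/tmp/spdk.sock socket:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                    # the test sleeps rather than waiting for a listener
  if scripts/rpc.py spdk_get_version; then   # must fail: no RPC server was started
      echo 'unexpected: RPC server answered' >&2
      kill "$tgt_pid"; exit 1
  fi
  kill "$tgt_pid"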
00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.146 20:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.406 [2024-07-15 20:55:44.458129] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:17.406 [2024-07-15 20:55:44.458189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760789 ] 00:05:17.406 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.406 [2024-07-15 20:55:44.526176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.406 [2024-07-15 20:55:44.592895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.343 [2024-07-15 20:55:45.282545] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:18.343 request: 00:05:18.343 { 00:05:18.343 "trtype": "tcp", 00:05:18.343 "method": "nvmf_get_transports", 00:05:18.343 "req_id": 1 00:05:18.343 } 00:05:18.343 Got JSON-RPC error response 00:05:18.343 response: 00:05:18.343 { 00:05:18.343 "code": -19, 00:05:18.343 "message": "No such device" 00:05:18.343 } 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.343 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 [2024-07-15 20:55:45.294649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.344 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.344 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:18.344 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.344 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.344 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:18.344 { 00:05:18.344 "subsystems": [ 00:05:18.344 { 00:05:18.344 "subsystem": "scheduler", 00:05:18.344 "config": [ 00:05:18.344 { 00:05:18.344 "method": "framework_set_scheduler", 00:05:18.344 "params": { 00:05:18.344 "name": "static" 00:05:18.344 } 00:05:18.344 } 00:05:18.344 ] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "vmd", 00:05:18.344 "config": [] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "sock", 00:05:18.344 "config": [ 00:05:18.344 { 00:05:18.344 "method": "sock_set_default_impl", 00:05:18.344 
"params": { 00:05:18.344 "impl_name": "posix" 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "sock_impl_set_options", 00:05:18.344 "params": { 00:05:18.344 "impl_name": "ssl", 00:05:18.344 "recv_buf_size": 4096, 00:05:18.344 "send_buf_size": 4096, 00:05:18.344 "enable_recv_pipe": true, 00:05:18.344 "enable_quickack": false, 00:05:18.344 "enable_placement_id": 0, 00:05:18.344 "enable_zerocopy_send_server": true, 00:05:18.344 "enable_zerocopy_send_client": false, 00:05:18.344 "zerocopy_threshold": 0, 00:05:18.344 "tls_version": 0, 00:05:18.344 "enable_ktls": false 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "sock_impl_set_options", 00:05:18.344 "params": { 00:05:18.344 "impl_name": "posix", 00:05:18.344 "recv_buf_size": 2097152, 00:05:18.344 "send_buf_size": 2097152, 00:05:18.344 "enable_recv_pipe": true, 00:05:18.344 "enable_quickack": false, 00:05:18.344 "enable_placement_id": 0, 00:05:18.344 "enable_zerocopy_send_server": true, 00:05:18.344 "enable_zerocopy_send_client": false, 00:05:18.344 "zerocopy_threshold": 0, 00:05:18.344 "tls_version": 0, 00:05:18.344 "enable_ktls": false 00:05:18.344 } 00:05:18.344 } 00:05:18.344 ] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "iobuf", 00:05:18.344 "config": [ 00:05:18.344 { 00:05:18.344 "method": "iobuf_set_options", 00:05:18.344 "params": { 00:05:18.344 "small_pool_count": 8192, 00:05:18.344 "large_pool_count": 1024, 00:05:18.344 "small_bufsize": 8192, 00:05:18.344 "large_bufsize": 135168 00:05:18.344 } 00:05:18.344 } 00:05:18.344 ] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "keyring", 00:05:18.344 "config": [] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "vfio_user_target", 00:05:18.344 "config": null 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "accel", 00:05:18.344 "config": [ 00:05:18.344 { 00:05:18.344 "method": "accel_set_options", 00:05:18.344 "params": { 00:05:18.344 "small_cache_size": 128, 00:05:18.344 "large_cache_size": 16, 00:05:18.344 "task_count": 2048, 00:05:18.344 "sequence_count": 2048, 00:05:18.344 "buf_count": 2048 00:05:18.344 } 00:05:18.344 } 00:05:18.344 ] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "bdev", 00:05:18.344 "config": [ 00:05:18.344 { 00:05:18.344 "method": "bdev_set_options", 00:05:18.344 "params": { 00:05:18.344 "bdev_io_pool_size": 65535, 00:05:18.344 "bdev_io_cache_size": 256, 00:05:18.344 "bdev_auto_examine": true, 00:05:18.344 "iobuf_small_cache_size": 128, 00:05:18.344 "iobuf_large_cache_size": 16 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "bdev_raid_set_options", 00:05:18.344 "params": { 00:05:18.344 "process_window_size_kb": 1024 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "bdev_nvme_set_options", 00:05:18.344 "params": { 00:05:18.344 "action_on_timeout": "none", 00:05:18.344 "timeout_us": 0, 00:05:18.344 "timeout_admin_us": 0, 00:05:18.344 "keep_alive_timeout_ms": 10000, 00:05:18.344 "arbitration_burst": 0, 00:05:18.344 "low_priority_weight": 0, 00:05:18.344 "medium_priority_weight": 0, 00:05:18.344 "high_priority_weight": 0, 00:05:18.344 "nvme_adminq_poll_period_us": 10000, 00:05:18.344 "nvme_ioq_poll_period_us": 0, 00:05:18.344 "io_queue_requests": 0, 00:05:18.344 "delay_cmd_submit": true, 00:05:18.344 "transport_retry_count": 4, 00:05:18.344 "bdev_retry_count": 3, 00:05:18.344 "transport_ack_timeout": 0, 00:05:18.344 "ctrlr_loss_timeout_sec": 0, 00:05:18.344 "reconnect_delay_sec": 0, 00:05:18.344 "fast_io_fail_timeout_sec": 0, 00:05:18.344 
"disable_auto_failback": false, 00:05:18.344 "generate_uuids": false, 00:05:18.344 "transport_tos": 0, 00:05:18.344 "nvme_error_stat": false, 00:05:18.344 "rdma_srq_size": 0, 00:05:18.344 "io_path_stat": false, 00:05:18.344 "allow_accel_sequence": false, 00:05:18.344 "rdma_max_cq_size": 0, 00:05:18.344 "rdma_cm_event_timeout_ms": 0, 00:05:18.344 "dhchap_digests": [ 00:05:18.344 "sha256", 00:05:18.344 "sha384", 00:05:18.344 "sha512" 00:05:18.344 ], 00:05:18.344 "dhchap_dhgroups": [ 00:05:18.344 "null", 00:05:18.344 "ffdhe2048", 00:05:18.344 "ffdhe3072", 00:05:18.344 "ffdhe4096", 00:05:18.344 "ffdhe6144", 00:05:18.344 "ffdhe8192" 00:05:18.344 ] 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "bdev_nvme_set_hotplug", 00:05:18.344 "params": { 00:05:18.344 "period_us": 100000, 00:05:18.344 "enable": false 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "bdev_iscsi_set_options", 00:05:18.344 "params": { 00:05:18.344 "timeout_sec": 30 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "bdev_wait_for_examine" 00:05:18.344 } 00:05:18.344 ] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "nvmf", 00:05:18.344 "config": [ 00:05:18.344 { 00:05:18.344 "method": "nvmf_set_config", 00:05:18.344 "params": { 00:05:18.344 "discovery_filter": "match_any", 00:05:18.344 "admin_cmd_passthru": { 00:05:18.344 "identify_ctrlr": false 00:05:18.344 } 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "nvmf_set_max_subsystems", 00:05:18.344 "params": { 00:05:18.344 "max_subsystems": 1024 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "nvmf_set_crdt", 00:05:18.344 "params": { 00:05:18.344 "crdt1": 0, 00:05:18.344 "crdt2": 0, 00:05:18.344 "crdt3": 0 00:05:18.344 } 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "method": "nvmf_create_transport", 00:05:18.344 "params": { 00:05:18.344 "trtype": "TCP", 00:05:18.344 "max_queue_depth": 128, 00:05:18.344 "max_io_qpairs_per_ctrlr": 127, 00:05:18.344 "in_capsule_data_size": 4096, 00:05:18.344 "max_io_size": 131072, 00:05:18.344 "io_unit_size": 131072, 00:05:18.344 "max_aq_depth": 128, 00:05:18.344 "num_shared_buffers": 511, 00:05:18.344 "buf_cache_size": 4294967295, 00:05:18.344 "dif_insert_or_strip": false, 00:05:18.344 "zcopy": false, 00:05:18.344 "c2h_success": true, 00:05:18.344 "sock_priority": 0, 00:05:18.344 "abort_timeout_sec": 1, 00:05:18.344 "ack_timeout": 0, 00:05:18.344 "data_wr_pool_size": 0 00:05:18.344 } 00:05:18.344 } 00:05:18.344 ] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "nbd", 00:05:18.344 "config": [] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "ublk", 00:05:18.344 "config": [] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "vhost_blk", 00:05:18.344 "config": [] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "scsi", 00:05:18.344 "config": null 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "iscsi", 00:05:18.344 "config": [ 00:05:18.344 { 00:05:18.344 "method": "iscsi_set_options", 00:05:18.344 "params": { 00:05:18.344 "node_base": "iqn.2016-06.io.spdk", 00:05:18.344 "max_sessions": 128, 00:05:18.344 "max_connections_per_session": 2, 00:05:18.344 "max_queue_depth": 64, 00:05:18.344 "default_time2wait": 2, 00:05:18.344 "default_time2retain": 20, 00:05:18.344 "first_burst_length": 8192, 00:05:18.344 "immediate_data": true, 00:05:18.344 "allow_duplicated_isid": false, 00:05:18.344 "error_recovery_level": 0, 00:05:18.344 "nop_timeout": 60, 00:05:18.344 "nop_in_interval": 30, 00:05:18.344 
"disable_chap": false, 00:05:18.344 "require_chap": false, 00:05:18.344 "mutual_chap": false, 00:05:18.344 "chap_group": 0, 00:05:18.344 "max_large_datain_per_connection": 64, 00:05:18.344 "max_r2t_per_connection": 4, 00:05:18.344 "pdu_pool_size": 36864, 00:05:18.344 "immediate_data_pool_size": 16384, 00:05:18.344 "data_out_pool_size": 2048 00:05:18.344 } 00:05:18.344 } 00:05:18.344 ] 00:05:18.344 }, 00:05:18.344 { 00:05:18.344 "subsystem": "vhost_scsi", 00:05:18.344 "config": [] 00:05:18.344 } 00:05:18.345 ] 00:05:18.345 } 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 760789 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 760789 ']' 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 760789 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 760789 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 760789' 00:05:18.345 killing process with pid 760789 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 760789 00:05:18.345 20:55:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 760789 00:05:18.604 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=761067 00:05:18.604 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:18.604 20:55:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 761067 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 761067 ']' 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 761067 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 761067 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 761067' 00:05:23.878 killing process with pid 761067 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 761067 00:05:23.878 20:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 761067 00:05:24.139 20:55:51 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:24.139 00:05:24.139 real 0m6.762s 00:05:24.139 user 0m6.572s 00:05:24.139 sys 0m0.635s 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.139 ************************************ 00:05:24.139 END TEST skip_rpc_with_json 00:05:24.139 ************************************ 00:05:24.139 20:55:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.139 20:55:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:24.139 20:55:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.139 20:55:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.139 20:55:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.139 ************************************ 00:05:24.139 START TEST skip_rpc_with_delay 00:05:24.139 ************************************ 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.139 [2024-07-15 20:55:51.302243] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
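The skip_rpc_with_json run above is a save/replay round-trip: nvmf_create_transport -t tcp is issued over RPC, save_config snapshots every subsystem into config.json (the JSON dump printed above), the target is then restarted non-interactively from that file, and its log is grepped for 'TCP Transport Init' to prove the transport was recreated. A rough sketch follows; redirecting the replayed target's output into log.txt is an assumption here, since only the final grep is visible in the trace.

  scripts/rpc.py nvmf_create_transport -t tcp            # logs '*** TCP Transport Init ***'
  scripts/rpc.py save_config > test/rpc/config.json      # the subsystem dump shown above
  kill "$tgt_pid"                                         # stop the interactive target
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  json_pid=$!
  sleep 5
  kill "$json_pid"
  grep -q 'TCP Transport Init' test/rpc/log.txt           # the saved config really was replayed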
00:05:24.139 [2024-07-15 20:55:51.302374] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.139 00:05:24.139 real 0m0.047s 00:05:24.139 user 0m0.017s 00:05:24.139 sys 0m0.029s 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.139 20:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:24.139 ************************************ 00:05:24.139 END TEST skip_rpc_with_delay 00:05:24.139 ************************************ 00:05:24.139 20:55:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.139 20:55:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:24.139 20:55:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:24.139 20:55:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:24.139 20:55:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.139 20:55:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.139 20:55:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.139 ************************************ 00:05:24.139 START TEST exit_on_failed_rpc_init 00:05:24.139 ************************************ 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=762176 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 762176 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 762176 ']' 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.139 20:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.400 [2024-07-15 20:55:51.430739] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:24.400 [2024-07-15 20:55:51.430808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762176 ] 00:05:24.400 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.400 [2024-07-15 20:55:51.498529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.400 [2024-07-15 20:55:51.575958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.970 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.230 [2024-07-15 20:55:52.275923] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:25.230 [2024-07-15 20:55:52.275985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762200 ] 00:05:25.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.230 [2024-07-15 20:55:52.343098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.230 [2024-07-15 20:55:52.417341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.230 [2024-07-15 20:55:52.417433] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:25.230 [2024-07-15 20:55:52.417451] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.230 [2024-07-15 20:55:52.417459] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 762176 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 762176 ']' 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 762176 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.230 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 762176 00:05:25.490 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.490 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.490 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 762176' 00:05:25.490 killing process with pid 762176 00:05:25.490 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 762176 00:05:25.490 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 762176 00:05:25.750 00:05:25.750 real 0m1.436s 00:05:25.750 user 0m1.611s 00:05:25.750 sys 0m0.431s 00:05:25.750 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.750 20:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.750 ************************************ 00:05:25.750 END TEST exit_on_failed_rpc_init 00:05:25.750 ************************************ 00:05:25.750 20:55:52 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:25.750 20:55:52 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:25.750 00:05:25.750 real 0m14.043s 00:05:25.750 user 0m13.477s 00:05:25.750 sys 0m1.690s 00:05:25.750 20:55:52 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.750 20:55:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.750 ************************************ 00:05:25.750 END TEST skip_rpc 00:05:25.750 ************************************ 00:05:25.750 20:55:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.750 20:55:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.750 20:55:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.750 20:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.750 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:05:25.750 ************************************ 00:05:25.750 START TEST rpc_client 00:05:25.750 ************************************ 00:05:25.750 20:55:52 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.011 * Looking for test storage... 00:05:26.011 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:26.011 20:55:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:26.011 OK 00:05:26.011 20:55:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:26.011 00:05:26.011 real 0m0.131s 00:05:26.011 user 0m0.048s 00:05:26.011 sys 0m0.094s 00:05:26.011 20:55:53 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.011 20:55:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:26.011 ************************************ 00:05:26.011 END TEST rpc_client 00:05:26.011 ************************************ 00:05:26.011 20:55:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.011 20:55:53 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:26.011 20:55:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.011 20:55:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.011 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:05:26.011 ************************************ 00:05:26.011 START TEST json_config 00:05:26.011 ************************************ 00:05:26.011 20:55:53 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:26.011 20:55:53 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.011 20:55:53 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.011 20:55:53 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:26.011 20:55:53 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.011 20:55:53 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.011 20:55:53 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.011 20:55:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.011 20:55:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.011 20:55:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.011 20:55:53 json_config -- paths/export.sh@5 -- # export PATH 00:05:26.012 20:55:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.012 20:55:53 json_config -- nvmf/common.sh@47 -- # : 0 00:05:26.012 20:55:53 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.012 20:55:53 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.012 20:55:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.012 20:55:53 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.012 20:55:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.012 20:55:53 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.012 20:55:53 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.012 20:55:53 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.012 20:55:53 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:26.012 20:55:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:26.012 20:55:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.012 20:55:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.012 20:55:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.012 20:55:53 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:26.012 WARNING: No tests are enabled so not running JSON configuration tests 00:05:26.012 20:55:53 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:26.012 00:05:26.012 real 0m0.109s 00:05:26.012 user 0m0.057s 00:05:26.012 sys 0m0.054s 00:05:26.012 20:55:53 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.012 20:55:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.012 ************************************ 00:05:26.012 END TEST json_config 00:05:26.012 ************************************ 00:05:26.272 20:55:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.272 20:55:53 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.272 20:55:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.272 20:55:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.272 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:05:26.272 ************************************ 00:05:26.272 START TEST json_config_extra_key 00:05:26.272 ************************************ 00:05:26.272 20:55:53 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.272 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:26.272 20:55:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.272 20:55:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.272 20:55:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.272 20:55:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.272 20:55:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.272 20:55:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.272 20:55:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:26.272 20:55:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.272 20:55:53 
json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.272 20:55:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.272 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:26.272 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:26.272 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:26.272 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.272 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:26.273 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.273 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:26.273 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:26.273 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:26.273 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.273 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:26.273 INFO: launching applications... 00:05:26.273 20:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=762604 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.273 Waiting for target to run... 
00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 762604 /var/tmp/spdk_tgt.sock 00:05:26.273 20:55:53 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 762604 ']' 00:05:26.273 20:55:53 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.273 20:55:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.273 20:55:53 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.273 20:55:53 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.273 20:55:53 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.273 20:55:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.273 [2024-07-15 20:55:53.500910] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:26.273 [2024-07-15 20:55:53.500997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762604 ] 00:05:26.273 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.842 [2024-07-15 20:55:53.942928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.842 [2024-07-15 20:55:54.032284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.102 20:55:54 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.102 20:55:54 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:27.102 00:05:27.102 20:55:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.102 INFO: shutting down applications... 
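Above, spdk_tgt is launched in the background with the extra_key.json config and an explicit RPC socket, and waitforlisten 762604 /var/tmp/spdk_tgt.sock blocks until that socket answers. The real helper lives in autotest_common.sh; this is only a sketch of the polling idea, with the retry count borrowed from max_retries=100 in the trace:

waitforlisten() {                                    # sketch, not the actual autotest_common.sh body
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1           # give up if the app already died
    scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
    sleep 0.1
  done
  return 1                                           # never started listening
}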
00:05:27.102 20:55:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 762604 ]] 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 762604 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 762604 00:05:27.102 20:55:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.672 20:55:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.672 20:55:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.672 20:55:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 762604 00:05:27.672 20:55:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.672 20:55:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.672 20:55:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.672 20:55:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.672 SPDK target shutdown done 00:05:27.672 20:55:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.672 Success 00:05:27.672 00:05:27.672 real 0m1.461s 00:05:27.672 user 0m1.031s 00:05:27.672 sys 0m0.565s 00:05:27.672 20:55:54 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.672 20:55:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.672 ************************************ 00:05:27.672 END TEST json_config_extra_key 00:05:27.672 ************************************ 00:05:27.672 20:55:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.672 20:55:54 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.672 20:55:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.672 20:55:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.672 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:05:27.672 ************************************ 00:05:27.672 START TEST alias_rpc 00:05:27.672 ************************************ 00:05:27.672 20:55:54 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.932 * Looking for test storage... 
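The shutdown path above mirrors the start-up: json_config_test_shutdown_app sends SIGINT to the recorded pid, then polls kill -0 for at most 30 half-second intervals before reporting "SPDK target shutdown done". Condensed into a sketch, with the loop bounds as they appear in common.sh in this trace:

kill -SIGINT "${app_pid[$app]}"
for ((i = 0; i < 30; i++)); do
  kill -0 "${app_pid[$app]}" 2>/dev/null || break    # target has exited
  sleep 0.5
done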
00:05:27.932 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.932 20:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.932 20:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=762921 00:05:27.932 20:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.932 20:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 762921 00:05:27.932 20:55:55 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 762921 ']' 00:05:27.932 20:55:55 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.932 20:55:55 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.932 20:55:55 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.932 20:55:55 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.932 20:55:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.932 [2024-07-15 20:55:55.030401] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:27.932 [2024-07-15 20:55:55.030465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762921 ] 00:05:27.932 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.932 [2024-07-15 20:55:55.098347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.932 [2024-07-15 20:55:55.171597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.871 20:55:55 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.871 20:55:55 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.871 20:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:28.871 20:55:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 762921 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 762921 ']' 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 762921 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 762921 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 762921' 00:05:28.871 killing process with pid 762921 00:05:28.871 20:55:56 alias_rpc -- common/autotest_common.sh@967 -- # kill 762921 00:05:28.872 20:55:56 alias_rpc -- common/autotest_common.sh@972 -- # wait 762921 00:05:29.130 00:05:29.130 real 0m1.479s 00:05:29.130 user 0m1.575s 00:05:29.130 sys 0m0.445s 00:05:29.130 20:55:56 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.130 20:55:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.130 
************************************ 00:05:29.130 END TEST alias_rpc 00:05:29.130 ************************************ 00:05:29.391 20:55:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.391 20:55:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:29.391 20:55:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.391 20:55:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.391 20:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.391 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:05:29.391 ************************************ 00:05:29.391 START TEST spdkcli_tcp 00:05:29.391 ************************************ 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.391 * Looking for test storage... 00:05:29.391 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=763238 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 763238 00:05:29.391 20:55:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 763238 ']' 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.391 20:55:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.391 [2024-07-15 20:55:56.579635] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:29.391 [2024-07-15 20:55:56.579709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763238 ] 00:05:29.391 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.391 [2024-07-15 20:55:56.648487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.650 [2024-07-15 20:55:56.726568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.650 [2024-07-15 20:55:56.726571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.219 20:55:57 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.219 20:55:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:30.219 20:55:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.219 20:55:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=763456 00:05:30.219 20:55:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:30.479 [ 00:05:30.479 "spdk_get_version", 00:05:30.479 "rpc_get_methods", 00:05:30.479 "trace_get_info", 00:05:30.479 "trace_get_tpoint_group_mask", 00:05:30.479 "trace_disable_tpoint_group", 00:05:30.479 "trace_enable_tpoint_group", 00:05:30.479 "trace_clear_tpoint_mask", 00:05:30.479 "trace_set_tpoint_mask", 00:05:30.479 "vfu_tgt_set_base_path", 00:05:30.479 "framework_get_pci_devices", 00:05:30.479 "framework_get_config", 00:05:30.479 "framework_get_subsystems", 00:05:30.479 "keyring_get_keys", 00:05:30.479 "iobuf_get_stats", 00:05:30.479 "iobuf_set_options", 00:05:30.479 "sock_get_default_impl", 00:05:30.479 "sock_set_default_impl", 00:05:30.479 "sock_impl_set_options", 00:05:30.479 "sock_impl_get_options", 00:05:30.479 "vmd_rescan", 00:05:30.479 "vmd_remove_device", 00:05:30.479 "vmd_enable", 00:05:30.479 "accel_get_stats", 00:05:30.479 "accel_set_options", 00:05:30.479 "accel_set_driver", 00:05:30.479 "accel_crypto_key_destroy", 00:05:30.479 "accel_crypto_keys_get", 00:05:30.479 "accel_crypto_key_create", 00:05:30.479 "accel_assign_opc", 00:05:30.479 "accel_get_module_info", 00:05:30.479 "accel_get_opc_assignments", 00:05:30.480 "notify_get_notifications", 00:05:30.480 "notify_get_types", 00:05:30.480 "bdev_get_histogram", 00:05:30.480 "bdev_enable_histogram", 00:05:30.480 "bdev_set_qos_limit", 00:05:30.480 "bdev_set_qd_sampling_period", 00:05:30.480 "bdev_get_bdevs", 00:05:30.480 "bdev_reset_iostat", 00:05:30.480 "bdev_get_iostat", 00:05:30.480 "bdev_examine", 00:05:30.480 "bdev_wait_for_examine", 00:05:30.480 "bdev_set_options", 00:05:30.480 "scsi_get_devices", 00:05:30.480 "thread_set_cpumask", 00:05:30.480 "framework_get_governor", 00:05:30.480 "framework_get_scheduler", 00:05:30.480 "framework_set_scheduler", 00:05:30.480 "framework_get_reactors", 00:05:30.480 "thread_get_io_channels", 00:05:30.480 "thread_get_pollers", 00:05:30.480 "thread_get_stats", 00:05:30.480 "framework_monitor_context_switch", 00:05:30.480 "spdk_kill_instance", 00:05:30.480 "log_enable_timestamps", 00:05:30.480 "log_get_flags", 00:05:30.480 "log_clear_flag", 00:05:30.480 "log_set_flag", 00:05:30.480 "log_get_level", 00:05:30.480 "log_set_level", 00:05:30.480 "log_get_print_level", 00:05:30.480 "log_set_print_level", 00:05:30.480 "framework_enable_cpumask_locks", 00:05:30.480 "framework_disable_cpumask_locks", 
00:05:30.480 "framework_wait_init", 00:05:30.480 "framework_start_init", 00:05:30.480 "virtio_blk_create_transport", 00:05:30.480 "virtio_blk_get_transports", 00:05:30.480 "vhost_controller_set_coalescing", 00:05:30.480 "vhost_get_controllers", 00:05:30.480 "vhost_delete_controller", 00:05:30.480 "vhost_create_blk_controller", 00:05:30.480 "vhost_scsi_controller_remove_target", 00:05:30.480 "vhost_scsi_controller_add_target", 00:05:30.480 "vhost_start_scsi_controller", 00:05:30.480 "vhost_create_scsi_controller", 00:05:30.480 "ublk_recover_disk", 00:05:30.480 "ublk_get_disks", 00:05:30.480 "ublk_stop_disk", 00:05:30.480 "ublk_start_disk", 00:05:30.480 "ublk_destroy_target", 00:05:30.480 "ublk_create_target", 00:05:30.480 "nbd_get_disks", 00:05:30.480 "nbd_stop_disk", 00:05:30.480 "nbd_start_disk", 00:05:30.480 "env_dpdk_get_mem_stats", 00:05:30.480 "nvmf_stop_mdns_prr", 00:05:30.480 "nvmf_publish_mdns_prr", 00:05:30.480 "nvmf_subsystem_get_listeners", 00:05:30.480 "nvmf_subsystem_get_qpairs", 00:05:30.480 "nvmf_subsystem_get_controllers", 00:05:30.480 "nvmf_get_stats", 00:05:30.480 "nvmf_get_transports", 00:05:30.480 "nvmf_create_transport", 00:05:30.480 "nvmf_get_targets", 00:05:30.480 "nvmf_delete_target", 00:05:30.480 "nvmf_create_target", 00:05:30.480 "nvmf_subsystem_allow_any_host", 00:05:30.480 "nvmf_subsystem_remove_host", 00:05:30.480 "nvmf_subsystem_add_host", 00:05:30.480 "nvmf_ns_remove_host", 00:05:30.480 "nvmf_ns_add_host", 00:05:30.480 "nvmf_subsystem_remove_ns", 00:05:30.480 "nvmf_subsystem_add_ns", 00:05:30.480 "nvmf_subsystem_listener_set_ana_state", 00:05:30.480 "nvmf_discovery_get_referrals", 00:05:30.480 "nvmf_discovery_remove_referral", 00:05:30.480 "nvmf_discovery_add_referral", 00:05:30.480 "nvmf_subsystem_remove_listener", 00:05:30.480 "nvmf_subsystem_add_listener", 00:05:30.480 "nvmf_delete_subsystem", 00:05:30.480 "nvmf_create_subsystem", 00:05:30.480 "nvmf_get_subsystems", 00:05:30.480 "nvmf_set_crdt", 00:05:30.480 "nvmf_set_config", 00:05:30.480 "nvmf_set_max_subsystems", 00:05:30.480 "iscsi_get_histogram", 00:05:30.480 "iscsi_enable_histogram", 00:05:30.480 "iscsi_set_options", 00:05:30.480 "iscsi_get_auth_groups", 00:05:30.480 "iscsi_auth_group_remove_secret", 00:05:30.480 "iscsi_auth_group_add_secret", 00:05:30.480 "iscsi_delete_auth_group", 00:05:30.480 "iscsi_create_auth_group", 00:05:30.480 "iscsi_set_discovery_auth", 00:05:30.480 "iscsi_get_options", 00:05:30.480 "iscsi_target_node_request_logout", 00:05:30.480 "iscsi_target_node_set_redirect", 00:05:30.480 "iscsi_target_node_set_auth", 00:05:30.480 "iscsi_target_node_add_lun", 00:05:30.480 "iscsi_get_stats", 00:05:30.480 "iscsi_get_connections", 00:05:30.480 "iscsi_portal_group_set_auth", 00:05:30.480 "iscsi_start_portal_group", 00:05:30.480 "iscsi_delete_portal_group", 00:05:30.480 "iscsi_create_portal_group", 00:05:30.480 "iscsi_get_portal_groups", 00:05:30.480 "iscsi_delete_target_node", 00:05:30.480 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.480 "iscsi_target_node_add_pg_ig_maps", 00:05:30.480 "iscsi_create_target_node", 00:05:30.480 "iscsi_get_target_nodes", 00:05:30.480 "iscsi_delete_initiator_group", 00:05:30.480 "iscsi_initiator_group_remove_initiators", 00:05:30.480 "iscsi_initiator_group_add_initiators", 00:05:30.480 "iscsi_create_initiator_group", 00:05:30.480 "iscsi_get_initiator_groups", 00:05:30.480 "keyring_linux_set_options", 00:05:30.480 "keyring_file_remove_key", 00:05:30.480 "keyring_file_add_key", 00:05:30.480 "vfu_virtio_create_scsi_endpoint", 00:05:30.480 
"vfu_virtio_scsi_remove_target", 00:05:30.480 "vfu_virtio_scsi_add_target", 00:05:30.480 "vfu_virtio_create_blk_endpoint", 00:05:30.480 "vfu_virtio_delete_endpoint", 00:05:30.480 "iaa_scan_accel_module", 00:05:30.480 "dsa_scan_accel_module", 00:05:30.480 "ioat_scan_accel_module", 00:05:30.480 "accel_error_inject_error", 00:05:30.480 "bdev_iscsi_delete", 00:05:30.480 "bdev_iscsi_create", 00:05:30.480 "bdev_iscsi_set_options", 00:05:30.480 "bdev_virtio_attach_controller", 00:05:30.480 "bdev_virtio_scsi_get_devices", 00:05:30.480 "bdev_virtio_detach_controller", 00:05:30.480 "bdev_virtio_blk_set_hotplug", 00:05:30.480 "bdev_ftl_set_property", 00:05:30.480 "bdev_ftl_get_properties", 00:05:30.480 "bdev_ftl_get_stats", 00:05:30.480 "bdev_ftl_unmap", 00:05:30.480 "bdev_ftl_unload", 00:05:30.480 "bdev_ftl_delete", 00:05:30.480 "bdev_ftl_load", 00:05:30.480 "bdev_ftl_create", 00:05:30.480 "bdev_aio_delete", 00:05:30.480 "bdev_aio_rescan", 00:05:30.480 "bdev_aio_create", 00:05:30.480 "blobfs_create", 00:05:30.480 "blobfs_detect", 00:05:30.480 "blobfs_set_cache_size", 00:05:30.480 "bdev_zone_block_delete", 00:05:30.480 "bdev_zone_block_create", 00:05:30.480 "bdev_delay_delete", 00:05:30.480 "bdev_delay_create", 00:05:30.480 "bdev_delay_update_latency", 00:05:30.480 "bdev_split_delete", 00:05:30.480 "bdev_split_create", 00:05:30.480 "bdev_error_inject_error", 00:05:30.480 "bdev_error_delete", 00:05:30.480 "bdev_error_create", 00:05:30.480 "bdev_raid_set_options", 00:05:30.480 "bdev_raid_remove_base_bdev", 00:05:30.480 "bdev_raid_add_base_bdev", 00:05:30.480 "bdev_raid_delete", 00:05:30.480 "bdev_raid_create", 00:05:30.480 "bdev_raid_get_bdevs", 00:05:30.480 "bdev_lvol_set_parent_bdev", 00:05:30.480 "bdev_lvol_set_parent", 00:05:30.480 "bdev_lvol_check_shallow_copy", 00:05:30.480 "bdev_lvol_start_shallow_copy", 00:05:30.480 "bdev_lvol_grow_lvstore", 00:05:30.480 "bdev_lvol_get_lvols", 00:05:30.480 "bdev_lvol_get_lvstores", 00:05:30.480 "bdev_lvol_delete", 00:05:30.480 "bdev_lvol_set_read_only", 00:05:30.480 "bdev_lvol_resize", 00:05:30.480 "bdev_lvol_decouple_parent", 00:05:30.480 "bdev_lvol_inflate", 00:05:30.480 "bdev_lvol_rename", 00:05:30.480 "bdev_lvol_clone_bdev", 00:05:30.480 "bdev_lvol_clone", 00:05:30.480 "bdev_lvol_snapshot", 00:05:30.480 "bdev_lvol_create", 00:05:30.480 "bdev_lvol_delete_lvstore", 00:05:30.480 "bdev_lvol_rename_lvstore", 00:05:30.480 "bdev_lvol_create_lvstore", 00:05:30.480 "bdev_passthru_delete", 00:05:30.480 "bdev_passthru_create", 00:05:30.480 "bdev_nvme_cuse_unregister", 00:05:30.480 "bdev_nvme_cuse_register", 00:05:30.480 "bdev_opal_new_user", 00:05:30.480 "bdev_opal_set_lock_state", 00:05:30.480 "bdev_opal_delete", 00:05:30.480 "bdev_opal_get_info", 00:05:30.480 "bdev_opal_create", 00:05:30.480 "bdev_nvme_opal_revert", 00:05:30.480 "bdev_nvme_opal_init", 00:05:30.480 "bdev_nvme_send_cmd", 00:05:30.480 "bdev_nvme_get_path_iostat", 00:05:30.480 "bdev_nvme_get_mdns_discovery_info", 00:05:30.480 "bdev_nvme_stop_mdns_discovery", 00:05:30.480 "bdev_nvme_start_mdns_discovery", 00:05:30.480 "bdev_nvme_set_multipath_policy", 00:05:30.480 "bdev_nvme_set_preferred_path", 00:05:30.480 "bdev_nvme_get_io_paths", 00:05:30.480 "bdev_nvme_remove_error_injection", 00:05:30.480 "bdev_nvme_add_error_injection", 00:05:30.480 "bdev_nvme_get_discovery_info", 00:05:30.480 "bdev_nvme_stop_discovery", 00:05:30.480 "bdev_nvme_start_discovery", 00:05:30.480 "bdev_nvme_get_controller_health_info", 00:05:30.480 "bdev_nvme_disable_controller", 00:05:30.480 "bdev_nvme_enable_controller", 00:05:30.480 
"bdev_nvme_reset_controller", 00:05:30.480 "bdev_nvme_get_transport_statistics", 00:05:30.480 "bdev_nvme_apply_firmware", 00:05:30.480 "bdev_nvme_detach_controller", 00:05:30.480 "bdev_nvme_get_controllers", 00:05:30.480 "bdev_nvme_attach_controller", 00:05:30.480 "bdev_nvme_set_hotplug", 00:05:30.480 "bdev_nvme_set_options", 00:05:30.480 "bdev_null_resize", 00:05:30.480 "bdev_null_delete", 00:05:30.480 "bdev_null_create", 00:05:30.480 "bdev_malloc_delete", 00:05:30.480 "bdev_malloc_create" 00:05:30.480 ] 00:05:30.480 20:55:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.480 20:55:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:30.480 20:55:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 763238 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 763238 ']' 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 763238 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 763238 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.480 20:55:57 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 763238' 00:05:30.480 killing process with pid 763238 00:05:30.481 20:55:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 763238 00:05:30.481 20:55:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 763238 00:05:30.740 00:05:30.740 real 0m1.516s 00:05:30.740 user 0m2.807s 00:05:30.740 sys 0m0.484s 00:05:30.740 20:55:57 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.740 20:55:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.740 ************************************ 00:05:30.740 END TEST spdkcli_tcp 00:05:30.740 ************************************ 00:05:30.740 20:55:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.740 20:55:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.740 20:55:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.740 20:55:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.740 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:05:31.000 ************************************ 00:05:31.000 START TEST dpdk_mem_utility 00:05:31.000 ************************************ 00:05:31.000 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.000 * Looking for test storage... 
00:05:31.000 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.000 20:55:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.000 20:55:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=763586 00:05:31.000 20:55:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 763586 00:05:31.000 20:55:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.000 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 763586 ']' 00:05:31.000 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.000 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.000 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.000 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.000 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.000 [2024-07-15 20:55:58.158155] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:31.000 [2024-07-15 20:55:58.158235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763586 ] 00:05:31.000 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.000 [2024-07-15 20:55:58.226511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.260 [2024-07-15 20:55:58.298947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.830 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.831 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:31.831 20:55:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:31.831 20:55:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:31.831 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.831 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.831 { 00:05:31.831 "filename": "/tmp/spdk_mem_dump.txt" 00:05:31.831 } 00:05:31.831 20:55:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.831 20:55:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.831 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:31.831 1 heaps totaling size 814.000000 MiB 00:05:31.831 size: 814.000000 MiB heap id: 0 00:05:31.831 end heaps---------- 00:05:31.831 8 mempools totaling size 598.116089 MiB 00:05:31.831 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:31.831 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:31.831 size: 84.521057 MiB name: bdev_io_763586 00:05:31.831 size: 51.011292 MiB name: evtpool_763586 00:05:31.831 
size: 50.003479 MiB name: msgpool_763586 00:05:31.831 size: 21.763794 MiB name: PDU_Pool 00:05:31.831 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:31.831 size: 0.026123 MiB name: Session_Pool 00:05:31.831 end mempools------- 00:05:31.831 6 memzones totaling size 4.142822 MiB 00:05:31.831 size: 1.000366 MiB name: RG_ring_0_763586 00:05:31.831 size: 1.000366 MiB name: RG_ring_1_763586 00:05:31.831 size: 1.000366 MiB name: RG_ring_4_763586 00:05:31.831 size: 1.000366 MiB name: RG_ring_5_763586 00:05:31.831 size: 0.125366 MiB name: RG_ring_2_763586 00:05:31.831 size: 0.015991 MiB name: RG_ring_3_763586 00:05:31.831 end memzones------- 00:05:31.831 20:55:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:31.831 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:31.831 list of free elements. size: 12.519348 MiB 00:05:31.831 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:31.831 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:31.831 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:31.831 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:31.831 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:31.831 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:31.831 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:31.831 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:31.831 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:31.831 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:31.831 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:31.831 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:31.831 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:31.831 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:31.831 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:31.831 list of standard malloc elements. 
size: 199.218079 MiB 00:05:31.831 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:31.831 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:31.831 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:31.831 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:31.831 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:31.831 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:31.831 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:31.831 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:31.831 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:31.831 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:31.831 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:31.831 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:31.831 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:31.831 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:31.831 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:31.831 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:31.831 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:31.831 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:31.831 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:31.831 list of memzone associated elements. 
size: 602.262573 MiB 00:05:31.831 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:31.831 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:31.831 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:31.831 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:31.831 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:31.831 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_763586_0 00:05:31.831 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:31.831 associated memzone info: size: 48.002930 MiB name: MP_evtpool_763586_0 00:05:31.831 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:31.831 associated memzone info: size: 48.002930 MiB name: MP_msgpool_763586_0 00:05:31.831 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:31.831 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:31.831 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:31.831 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:31.831 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:31.831 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_763586 00:05:31.831 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:31.831 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_763586 00:05:31.831 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:31.831 associated memzone info: size: 1.007996 MiB name: MP_evtpool_763586 00:05:31.831 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:31.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:31.831 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:31.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:31.831 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:31.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:31.831 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:31.831 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:31.831 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:31.831 associated memzone info: size: 1.000366 MiB name: RG_ring_0_763586 00:05:31.831 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:31.831 associated memzone info: size: 1.000366 MiB name: RG_ring_1_763586 00:05:31.831 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:31.831 associated memzone info: size: 1.000366 MiB name: RG_ring_4_763586 00:05:31.831 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:31.831 associated memzone info: size: 1.000366 MiB name: RG_ring_5_763586 00:05:31.831 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:31.831 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_763586 00:05:31.831 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:31.831 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:31.831 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:31.831 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:31.831 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:31.831 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:31.831 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:31.831 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_763586 00:05:31.831 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:31.831 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:31.831 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:31.831 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:31.831 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:31.831 associated memzone info: size: 0.015991 MiB name: RG_ring_3_763586 00:05:31.831 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:31.831 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:31.831 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:31.831 associated memzone info: size: 0.000183 MiB name: MP_msgpool_763586 00:05:31.831 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:31.831 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_763586 00:05:31.831 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:31.831 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:31.831 20:55:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:31.831 20:55:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 763586 00:05:31.831 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 763586 ']' 00:05:31.831 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 763586 00:05:31.831 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:31.831 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.832 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 763586 00:05:32.091 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.091 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.091 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 763586' 00:05:32.091 killing process with pid 763586 00:05:32.091 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 763586 00:05:32.091 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 763586 00:05:32.352 00:05:32.352 real 0m1.397s 00:05:32.352 user 0m1.444s 00:05:32.352 sys 0m0.437s 00:05:32.352 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.352 20:55:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.352 ************************************ 00:05:32.352 END TEST dpdk_mem_utility 00:05:32.352 ************************************ 00:05:32.352 20:55:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.352 20:55:59 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:32.352 20:55:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.352 20:55:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.352 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:05:32.352 ************************************ 00:05:32.352 START TEST event 00:05:32.352 ************************************ 00:05:32.352 20:55:59 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:32.352 * Looking for test storage... 
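The dpdk_mem_utility pass just above shows the memory-introspection flow end to end: env_dpdk_get_mem_stats makes the target write /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py turns that dump into the heap, mempool and memzone summary, with -m 0 adding the element-level map of heap 0. Reproduced as a short sequence against a running target, paths as used in this workspace:

scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
scripts/dpdk_mem_info.py                   # heap / mempool / memzone totals
scripts/dpdk_mem_info.py -m 0              # per-element breakdown of heap id 0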
00:05:32.352 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:32.352 20:55:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:32.352 20:55:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:32.352 20:55:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.352 20:55:59 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:32.352 20:55:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.352 20:55:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.612 ************************************ 00:05:32.612 START TEST event_perf 00:05:32.612 ************************************ 00:05:32.612 20:55:59 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.612 Running I/O for 1 seconds...[2024-07-15 20:55:59.666576] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:32.612 [2024-07-15 20:55:59.666659] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763904 ] 00:05:32.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.612 [2024-07-15 20:55:59.738828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.612 [2024-07-15 20:55:59.812245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.612 [2024-07-15 20:55:59.812340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.612 [2024-07-15 20:55:59.812433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.612 [2024-07-15 20:55:59.812435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.028 Running I/O for 1 seconds... 00:05:34.028 lcore 0: 202214 00:05:34.028 lcore 1: 202215 00:05:34.028 lcore 2: 202214 00:05:34.028 lcore 3: 202212 00:05:34.028 done. 00:05:34.028 00:05:34.028 real 0m1.233s 00:05:34.028 user 0m4.135s 00:05:34.028 sys 0m0.094s 00:05:34.028 20:56:00 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.028 20:56:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.028 ************************************ 00:05:34.028 END TEST event_perf 00:05:34.028 ************************************ 00:05:34.028 20:56:00 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.028 20:56:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.028 20:56:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:34.028 20:56:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.028 20:56:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.028 ************************************ 00:05:34.028 START TEST event_reactor 00:05:34.028 ************************************ 00:05:34.028 20:56:00 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.028 [2024-07-15 20:56:00.973815] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:34.028 [2024-07-15 20:56:00.973902] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764190 ] 00:05:34.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.028 [2024-07-15 20:56:01.043046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.028 [2024-07-15 20:56:01.113863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.978 test_start 00:05:34.978 oneshot 00:05:34.978 tick 100 00:05:34.978 tick 100 00:05:34.978 tick 250 00:05:34.978 tick 100 00:05:34.978 tick 100 00:05:34.978 tick 100 00:05:34.978 tick 250 00:05:34.978 tick 500 00:05:34.978 tick 100 00:05:34.978 tick 100 00:05:34.978 tick 250 00:05:34.978 tick 100 00:05:34.978 tick 100 00:05:34.978 test_end 00:05:34.978 00:05:34.978 real 0m1.222s 00:05:34.978 user 0m1.137s 00:05:34.978 sys 0m0.080s 00:05:34.978 20:56:02 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.978 20:56:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:34.978 ************************************ 00:05:34.978 END TEST event_reactor 00:05:34.978 ************************************ 00:05:34.978 20:56:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.978 20:56:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.978 20:56:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:34.978 20:56:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.978 20:56:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.978 ************************************ 00:05:34.978 START TEST event_reactor_perf 00:05:34.978 ************************************ 00:05:34.978 20:56:02 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.238 [2024-07-15 20:56:02.273608] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:35.238 [2024-07-15 20:56:02.273723] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764480 ] 00:05:35.238 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.238 [2024-07-15 20:56:02.343790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.238 [2024-07-15 20:56:02.413389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.619 test_start 00:05:36.619 test_end 00:05:36.619 Performance: 971082 events per second 00:05:36.619 00:05:36.619 real 0m1.220s 00:05:36.619 user 0m1.126s 00:05:36.619 sys 0m0.090s 00:05:36.619 20:56:03 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.619 20:56:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.619 ************************************ 00:05:36.619 END TEST event_reactor_perf 00:05:36.619 ************************************ 00:05:36.619 20:56:03 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.619 20:56:03 event -- event/event.sh@49 -- # uname -s 00:05:36.619 20:56:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.619 20:56:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.619 20:56:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.619 20:56:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.619 20:56:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.619 ************************************ 00:05:36.619 START TEST event_scheduler 00:05:36.619 ************************************ 00:05:36.619 20:56:03 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.619 * Looking for test storage... 00:05:36.619 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:36.619 20:56:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.619 20:56:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=764791 00:05:36.619 20:56:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:36.619 20:56:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.619 20:56:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 764791 00:05:36.619 20:56:03 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 764791 ']' 00:05:36.619 20:56:03 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.619 20:56:03 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.619 20:56:03 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
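The three event micro-benchmarks that ran above are standalone binaries under test/event/, so the same numbers (loops per lcore, ticks, events per second) can be reproduced outside the harness; invocations as used in this run:

test/event/event_perf/event_perf -m 0xF -t 1       # per-lcore event dispatch count over 1 s
test/event/reactor/reactor -t 1                    # single-reactor tick test
test/event/reactor_perf/reactor_perf -t 1          # events per second through one reactor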
00:05:36.619 20:56:03 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.619 20:56:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.620 [2024-07-15 20:56:03.686361] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:36.620 [2024-07-15 20:56:03.686464] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764791 ] 00:05:36.620 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.620 [2024-07-15 20:56:03.755475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.620 [2024-07-15 20:56:03.830822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.620 [2024-07-15 20:56:03.830916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.620 [2024-07-15 20:56:03.830978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.620 [2024-07-15 20:56:03.830979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:37.559 20:56:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 [2024-07-15 20:56:04.509308] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:37.559 [2024-07-15 20:56:04.509330] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.559 [2024-07-15 20:56:04.509344] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.559 [2024-07-15 20:56:04.509352] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.559 [2024-07-15 20:56:04.509359] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.559 20:56:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 [2024-07-15 20:56:04.580149] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
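Because the scheduler test app was started with --wait-for-rpc, it idles in the RPC configuration state until told which scheduler to use; the trace above shows framework_set_scheduler dynamic (falling back gracefully when the DPDK governor rejects the partial SMT core mask) followed by framework_start_init. Driven by hand against the same socket, the handshake is just two RPCs:

scripts/rpc.py framework_set_scheduler dynamic     # select the scheduler before subsystem init
scripts/rpc.py framework_start_init                # let initialization proceed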
00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.559 20:56:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 ************************************ 00:05:37.559 START TEST scheduler_create_thread 00:05:37.559 ************************************ 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 2 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 3 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 4 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 5 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 6 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 7 00:05:37.559 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.560 8 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.560 9 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.560 10 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.560 20:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.129 20:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.129 20:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:38.129 20:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.129 20:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.509 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.509 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.509 20:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.509 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.509 20:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.448 20:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.448 00:05:40.448 real 0m3.103s 00:05:40.448 user 0m0.021s 00:05:40.448 sys 0m0.009s 00:05:40.448 20:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.448 20:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.448 ************************************ 00:05:40.448 END TEST scheduler_create_thread 00:05:40.448 ************************************ 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:40.708 20:56:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:40.708 20:56:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 764791 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 764791 ']' 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 764791 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 764791 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 764791' 00:05:40.708 killing process with pid 764791 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 764791 00:05:40.708 20:56:07 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 764791 00:05:40.967 [2024-07-15 20:56:08.103378] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
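For reference, the scheduler_create_thread test that just passed boils down to the plugin RPC sequence below; a condensed sketch, with rpc_cmd defined the way the harness uses it (scripts/rpc.py plus the scheduler_plugin, assumed to be importable), and thread ids 11/12 being specific to this run:

# Condensed from the xtrace above; the active/idle creates repeat once per core mask.
rpc_cmd() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }
rpc_cmd scheduler_thread_create -n active_pinned -m 0x1 -a 100   # also for 0x2, 0x4, 0x8
rpc_cmd scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # also for 0x2, 0x4, 0x8
rpc_cmd scheduler_thread_create -n one_third_active -a 30
tid=$(rpc_cmd scheduler_thread_create -n half_active -a 0)       # returned thread_id 11 here
rpc_cmd scheduler_thread_set_active "$tid" 50
tid=$(rpc_cmd scheduler_thread_create -n deleted -a 100)         # returned thread_id 12 here
rpc_cmd scheduler_thread_delete "$tid"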
00:05:41.227 00:05:41.227 real 0m4.750s 00:05:41.227 user 0m9.223s 00:05:41.227 sys 0m0.416s 00:05:41.227 20:56:08 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.227 20:56:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.227 ************************************ 00:05:41.227 END TEST event_scheduler 00:05:41.227 ************************************ 00:05:41.227 20:56:08 event -- common/autotest_common.sh@1142 -- # return 0 00:05:41.227 20:56:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.227 20:56:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.227 20:56:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.227 20:56:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.227 20:56:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.227 ************************************ 00:05:41.227 START TEST app_repeat 00:05:41.227 ************************************ 00:05:41.227 20:56:08 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=765647 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 765647' 00:05:41.227 Process app_repeat pid: 765647 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:41.227 spdk_app_start Round 0 00:05:41.227 20:56:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 765647 /var/tmp/spdk-nbd.sock 00:05:41.227 20:56:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 765647 ']' 00:05:41.227 20:56:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.227 20:56:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.227 20:56:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.227 20:56:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.227 20:56:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.227 [2024-07-15 20:56:08.425681] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:41.227 [2024-07-15 20:56:08.425750] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765647 ] 00:05:41.227 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.227 [2024-07-15 20:56:08.497022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.486 [2024-07-15 20:56:08.569839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.486 [2024-07-15 20:56:08.569841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.056 20:56:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.056 20:56:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:42.056 20:56:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.315 Malloc0 00:05:42.315 20:56:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.315 Malloc1 00:05:42.574 20:56:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.574 /dev/nbd0 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.574 20:56:09 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.574 1+0 records in 00:05:42.574 1+0 records out 00:05:42.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226138 s, 18.1 MB/s 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.574 20:56:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.574 20:56:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.833 /dev/nbd1 00:05:42.833 20:56:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.833 20:56:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.833 20:56:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:42.833 20:56:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.833 20:56:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.833 20:56:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.833 20:56:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:42.833 20:56:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.833 20:56:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.833 20:56:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.834 20:56:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.834 1+0 records in 00:05:42.834 1+0 records out 00:05:42.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276596 s, 14.8 MB/s 00:05:42.834 20:56:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:42.834 20:56:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.834 20:56:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:42.834 20:56:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.834 20:56:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.834 20:56:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.834 
20:56:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.834 20:56:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.834 20:56:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.834 20:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.093 { 00:05:43.093 "nbd_device": "/dev/nbd0", 00:05:43.093 "bdev_name": "Malloc0" 00:05:43.093 }, 00:05:43.093 { 00:05:43.093 "nbd_device": "/dev/nbd1", 00:05:43.093 "bdev_name": "Malloc1" 00:05:43.093 } 00:05:43.093 ]' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.093 { 00:05:43.093 "nbd_device": "/dev/nbd0", 00:05:43.093 "bdev_name": "Malloc0" 00:05:43.093 }, 00:05:43.093 { 00:05:43.093 "nbd_device": "/dev/nbd1", 00:05:43.093 "bdev_name": "Malloc1" 00:05:43.093 } 00:05:43.093 ]' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.093 /dev/nbd1' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.093 /dev/nbd1' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.093 256+0 records in 00:05:43.093 256+0 records out 00:05:43.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109036 s, 96.2 MB/s 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.093 256+0 records in 00:05:43.093 256+0 records out 00:05:43.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204804 s, 51.2 MB/s 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.093 256+0 records in 00:05:43.093 256+0 records out 
00:05:43.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218637 s, 48.0 MB/s 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.093 20:56:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.352 20:56:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.610 20:56:10 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.610 20:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.868 20:56:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.868 20:56:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.127 20:56:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.127 [2024-07-15 20:56:11.346961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.127 [2024-07-15 20:56:11.418162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.127 [2024-07-15 20:56:11.418165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.386 [2024-07-15 20:56:11.457493] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.386 [2024-07-15 20:56:11.457536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.917 20:56:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.917 20:56:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:46.917 spdk_app_start Round 1 00:05:46.917 20:56:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 765647 /var/tmp/spdk-nbd.sock 00:05:46.917 20:56:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 765647 ']' 00:05:46.917 20:56:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.917 20:56:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.917 20:56:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
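Each app_repeat round that the trace above walks through is the same nbd round-trip check; roughly the following, with rpc standing for the -s /var/tmp/spdk-nbd.sock invocation of scripts/rpc.py and nbdrandtest standing in for the temp file under test/event/:

# One round of nbd_rpc_data_verify as traced above, stripped of the helper loops.
rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096                        # -> Malloc0 (a second call -> Malloc1)
$rpc nbd_start_disk Malloc0 /dev/nbd0                  # Malloc1 -> /dev/nbd1 the same way
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                     # read back and compare (ditto /dev/nbd1)
rm nbdrandtest
$rpc nbd_stop_disk /dev/nbd0                           # then /dev/nbd1; nbd_get_disks returns []
$rpc spdk_kill_instance SIGTERM                        # ends the round; the app reinitializes for the next one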
00:05:46.917 20:56:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.918 20:56:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.213 20:56:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.213 20:56:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:47.213 20:56:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.472 Malloc0 00:05:47.472 20:56:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.472 Malloc1 00:05:47.472 20:56:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.472 20:56:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.731 /dev/nbd0 00:05:47.731 20:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.731 20:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.731 1+0 records in 00:05:47.731 1+0 records out 00:05:47.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252057 s, 16.3 MB/s 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.731 20:56:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:47.731 20:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.731 20:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.731 20:56:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.989 /dev/nbd1 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.989 1+0 records in 00:05:47.989 1+0 records out 00:05:47.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240998 s, 17.0 MB/s 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.989 20:56:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.989 { 00:05:47.989 "nbd_device": "/dev/nbd0", 00:05:47.989 "bdev_name": "Malloc0" 00:05:47.989 }, 00:05:47.989 { 00:05:47.989 "nbd_device": "/dev/nbd1", 00:05:47.989 "bdev_name": "Malloc1" 00:05:47.989 } 00:05:47.989 ]' 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.989 { 00:05:47.989 "nbd_device": "/dev/nbd0", 00:05:47.989 "bdev_name": "Malloc0" 00:05:47.989 }, 00:05:47.989 { 00:05:47.989 "nbd_device": "/dev/nbd1", 00:05:47.989 "bdev_name": "Malloc1" 00:05:47.989 } 00:05:47.989 ]' 00:05:47.989 20:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.248 /dev/nbd1' 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.248 /dev/nbd1' 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.248 256+0 records in 00:05:48.248 256+0 records out 00:05:48.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113807 s, 92.1 MB/s 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.248 256+0 records in 00:05:48.248 256+0 records out 00:05:48.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200703 s, 52.2 MB/s 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.248 256+0 records in 00:05:48.248 256+0 records out 00:05:48.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219463 s, 47.8 MB/s 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.248 20:56:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.507 20:56:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.767 20:56:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.767 20:56:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.767 20:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.767 20:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.767 20:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.767 20:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.767 20:56:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.767 20:56:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.026 20:56:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.285 [2024-07-15 20:56:16.396957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.285 [2024-07-15 20:56:16.461495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.285 [2024-07-15 20:56:16.461497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.285 [2024-07-15 20:56:16.501640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.285 [2024-07-15 20:56:16.501685] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.570 20:56:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.570 20:56:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:52.570 spdk_app_start Round 2 00:05:52.570 20:56:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 765647 /var/tmp/spdk-nbd.sock 00:05:52.570 20:56:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 765647 ']' 00:05:52.570 20:56:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.570 20:56:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.570 20:56:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
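The Round 0/1/2 repetition above is driven by a small loop in test/event/event.sh; an approximate reconstruction from the xtrace (lines @23-@35 in the trace), not a verbatim copy of the script, and waitforlisten plus nbd_rpc_data_verify are the harness helpers visible in the trace rather than standalone commands:

# Approximate shape of the loop behind the three rounds, per the event.sh xtrace.
for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock              # app_repeat is listening again
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc0, then again for Malloc1
  nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM     # stop this iteration
  sleep 3                                                         # give the app time to restart
done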
00:05:52.570 20:56:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.570 20:56:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.570 20:56:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.570 20:56:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:52.570 20:56:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.570 Malloc0 00:05:52.570 20:56:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.570 Malloc1 00:05:52.570 20:56:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.570 20:56:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.830 /dev/nbd0 00:05:52.830 20:56:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.830 20:56:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.830 1+0 records in 00:05:52.830 1+0 records out 00:05:52.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174038 s, 23.5 MB/s 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.830 20:56:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.830 20:56:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.830 20:56:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.830 20:56:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.830 /dev/nbd1 00:05:52.830 20:56:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.830 20:56:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.830 1+0 records in 00:05:52.830 1+0 records out 00:05:52.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243047 s, 16.9 MB/s 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.830 20:56:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.830 20:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.830 20:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.089 { 00:05:53.089 "nbd_device": "/dev/nbd0", 00:05:53.089 "bdev_name": "Malloc0" 00:05:53.089 }, 00:05:53.089 { 00:05:53.089 "nbd_device": "/dev/nbd1", 00:05:53.089 "bdev_name": "Malloc1" 00:05:53.089 } 00:05:53.089 ]' 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.089 { 00:05:53.089 "nbd_device": "/dev/nbd0", 00:05:53.089 "bdev_name": "Malloc0" 00:05:53.089 }, 00:05:53.089 { 00:05:53.089 "nbd_device": "/dev/nbd1", 00:05:53.089 "bdev_name": "Malloc1" 00:05:53.089 } 00:05:53.089 ]' 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.089 /dev/nbd1' 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.089 /dev/nbd1' 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.089 256+0 records in 00:05:53.089 256+0 records out 00:05:53.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100933 s, 104 MB/s 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.089 20:56:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.348 256+0 records in 00:05:53.348 256+0 records out 00:05:53.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205073 s, 51.1 MB/s 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.348 256+0 records in 00:05:53.348 256+0 records out 00:05:53.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218916 s, 47.9 MB/s 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.348 20:56:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.607 20:56:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.866 20:56:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.866 20:56:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.124 20:56:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.383 [2024-07-15 20:56:21.420534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.383 [2024-07-15 20:56:21.488542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.383 [2024-07-15 20:56:21.488544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.383 [2024-07-15 20:56:21.528615] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.383 [2024-07-15 20:56:21.528657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.672 20:56:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 765647 /var/tmp/spdk-nbd.sock 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 765647 ']' 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
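The write/verify pass traced above boils down to a small pattern: fill a scratch file from /dev/urandom, copy it onto each exported NBD device with O_DIRECT, read the devices back and byte-compare them against the scratch file, then tear the exports down over RPC. A minimal stand-alone sketch of that flow (the 1 MiB size, block size and cmp flags come from the trace; the scratch path and loop structure are illustrative, the real steps live in test/bdev/nbd_common.sh):

    tmp_file=$(mktemp)                        # the suite uses test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256               # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct     # push it to each export
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"       # byte-compare; non-zero exit on any mismatch
    done
    rm "$tmp_file"

After the compare, each device is detached with the nbd_stop_disk RPC and the helper polls /proc/partitions until the nbdX entry disappears, which is the grep -q -w loop visible in the trace.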
00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.672 20:56:24 event.app_repeat -- event/event.sh@39 -- # killprocess 765647 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 765647 ']' 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 765647 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 765647 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 765647' 00:05:57.672 killing process with pid 765647 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@967 -- # kill 765647 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@972 -- # wait 765647 00:05:57.672 spdk_app_start is called in Round 0. 00:05:57.672 Shutdown signal received, stop current app iteration 00:05:57.672 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:57.672 spdk_app_start is called in Round 1. 00:05:57.672 Shutdown signal received, stop current app iteration 00:05:57.672 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:57.672 spdk_app_start is called in Round 2. 00:05:57.672 Shutdown signal received, stop current app iteration 00:05:57.672 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:57.672 spdk_app_start is called in Round 3. 
00:05:57.672 Shutdown signal received, stop current app iteration 00:05:57.672 20:56:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:57.672 20:56:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:57.672 00:05:57.672 real 0m16.222s 00:05:57.672 user 0m34.332s 00:05:57.672 sys 0m3.110s 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.672 20:56:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.672 ************************************ 00:05:57.672 END TEST app_repeat 00:05:57.672 ************************************ 00:05:57.672 20:56:24 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.672 20:56:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:57.672 20:56:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:57.672 20:56:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.672 20:56:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.672 20:56:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.672 ************************************ 00:05:57.672 START TEST cpu_locks 00:05:57.672 ************************************ 00:05:57.672 20:56:24 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:57.672 * Looking for test storage... 00:05:57.672 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:57.672 20:56:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:57.672 20:56:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:57.672 20:56:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:57.672 20:56:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:57.672 20:56:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.672 20:56:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.672 20:56:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.672 ************************************ 00:05:57.672 START TEST default_locks 00:05:57.672 ************************************ 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=768573 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 768573 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 768573 ']' 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
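default_locks and the rest of the cpu_locks suite lean on two helpers that have already shown up in the trace: waitforlisten, which polls until the freshly started target answers on its UNIX-domain RPC socket, and killprocess, which inspects the PID before signalling it and then waits for it to exit. A simplified sketch of the killprocess side, with the behaviour inferred from the traced checks (the real helper is in test/common/autotest_common.sh and handles more cases, including a sudo wrapper):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if it is already gone
        # the traced helper also looks at the command name (reactor_0 for an SPDK app)
        # and special-cases a sudo wrapper; that branch is omitted here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap it when it is a child of this shell
    }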
00:05:57.672 20:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.672 20:56:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.672 [2024-07-15 20:56:24.855063] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:57.672 [2024-07-15 20:56:24.855127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768573 ] 00:05:57.672 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.672 [2024-07-15 20:56:24.924240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.931 [2024-07-15 20:56:25.002172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.500 20:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.500 20:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:58.500 20:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 768573 00:05:58.500 20:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 768573 00:05:58.500 20:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.759 lslocks: write error 00:05:58.759 20:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 768573 00:05:58.759 20:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 768573 ']' 00:05:58.759 20:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 768573 00:05:58.759 20:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:58.759 20:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.759 20:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 768573 00:05:58.759 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.759 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.759 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 768573' 00:05:58.759 killing process with pid 768573 00:05:58.759 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 768573 00:05:58.759 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 768573 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 768573 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 768573 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 768573 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 768573 ']' 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.019 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.279 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (768573) - No such process 00:05:59.279 ERROR: process (pid: 768573) is no longer running 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.279 00:05:59.279 real 0m1.482s 00:05:59.279 user 0m1.545s 00:05:59.279 sys 0m0.509s 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.279 20:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.279 ************************************ 00:05:59.279 END TEST default_locks 00:05:59.279 ************************************ 00:05:59.279 20:56:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.279 20:56:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:59.279 20:56:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.279 20:56:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.279 20:56:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.279 ************************************ 00:05:59.279 START TEST default_locks_via_rpc 00:05:59.279 ************************************ 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=768895 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 768895 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 768895 ']' 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.279 20:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.279 [2024-07-15 20:56:26.422480] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:59.279 [2024-07-15 20:56:26.422548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768895 ] 00:05:59.279 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.279 [2024-07-15 20:56:26.491271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.279 [2024-07-15 20:56:26.568789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.217 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 768895 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.218 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 768895 
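Both lock checks above reduce to the same probe: when an SPDK target claims a core it takes a write lock on a per-core file under /var/tmp, so "does this PID hold a core lock" is answered by asking lslocks and grepping for the lock-file prefix. The via_rpc variant additionally toggles the behaviour at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen in the trace. A minimal sketch of the probe (the spdk_cpu_lock prefix and RPC names come from the trace; the function name and shortened rpc.py path are illustrative):

    locks_exist() {
        # a target that has claimed core N holds a lock such as /var/tmp/spdk_cpu_lock_00N
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # runtime toggle exercised by default_locks_via_rpc
    ./scripts/rpc.py framework_disable_cpumask_locks    # release the per-core locks
    ./scripts/rpc.py framework_enable_cpumask_locks     # re-acquire them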
00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 768895 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 768895 ']' 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 768895 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 768895 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 768895' 00:06:00.786 killing process with pid 768895 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 768895 00:06:00.786 20:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 768895 00:06:01.045 00:06:01.045 real 0m1.748s 00:06:01.045 user 0m1.831s 00:06:01.045 sys 0m0.607s 00:06:01.045 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.045 20:56:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.045 ************************************ 00:06:01.045 END TEST default_locks_via_rpc 00:06:01.045 ************************************ 00:06:01.045 20:56:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.045 20:56:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:01.045 20:56:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.045 20:56:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.045 20:56:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.045 ************************************ 00:06:01.045 START TEST non_locking_app_on_locked_coremask 00:06:01.045 ************************************ 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=769329 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 769329 /var/tmp/spdk.sock 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 769329 ']' 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.045 20:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.045 [2024-07-15 20:56:28.252165] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:01.045 [2024-07-15 20:56:28.252229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769329 ] 00:06:01.045 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.045 [2024-07-15 20:56:28.320453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.369 [2024-07-15 20:56:28.401000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=769419 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 769419 /var/tmp/spdk2.sock 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 769419 ']' 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.938 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.938 [2024-07-15 20:56:29.098376] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:01.938 [2024-07-15 20:56:29.098476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769419 ] 00:06:01.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.938 [2024-07-15 20:56:29.188074] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
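The two launches above are the point of non_locking_app_on_locked_coremask: the first target takes core 0 with locking enabled, and the second is started on the same core but with --disable-cpumask-locks, so it prints 'CPU core locks deactivated.' and comes up instead of aborting. Roughly, with the binary path shortened to spdk_tgt and socket names as in the trace:

    spdk_tgt -m 0x1 &                                                  # first target locks core 0
    waitforlisten $! /var/tmp/spdk.sock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
    waitforlisten $! /var/tmp/spdk2.sock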
00:06:01.938 [2024-07-15 20:56:29.188095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.197 [2024-07-15 20:56:29.331169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.766 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.766 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:02.766 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 769329 00:06:02.766 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 769329 00:06:02.766 20:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.143 lslocks: write error 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 769329 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 769329 ']' 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 769329 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 769329 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 769329' 00:06:04.143 killing process with pid 769329 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 769329 00:06:04.143 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 769329 00:06:04.710 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 769419 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 769419 ']' 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 769419 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 769419 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 769419' 00:06:04.711 killing 
process with pid 769419 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 769419 00:06:04.711 20:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 769419 00:06:04.970 00:06:04.970 real 0m3.927s 00:06:04.970 user 0m4.187s 00:06:04.970 sys 0m1.363s 00:06:04.970 20:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.970 20:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.970 ************************************ 00:06:04.970 END TEST non_locking_app_on_locked_coremask 00:06:04.970 ************************************ 00:06:04.970 20:56:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:04.970 20:56:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:04.970 20:56:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.970 20:56:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.970 20:56:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.970 ************************************ 00:06:04.970 START TEST locking_app_on_unlocked_coremask 00:06:04.970 ************************************ 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=769990 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 769990 /var/tmp/spdk.sock 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 769990 ']' 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.970 20:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.228 [2024-07-15 20:56:32.262169] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:05.228 [2024-07-15 20:56:32.262250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769990 ] 00:06:05.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.228 [2024-07-15 20:56:32.330509] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:05.228 [2024-07-15 20:56:32.330532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.228 [2024-07-15 20:56:32.407640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=770249 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 770249 /var/tmp/spdk2.sock 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 770249 ']' 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.794 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.052 [2024-07-15 20:56:33.103015] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
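locking_app_on_unlocked_coremask is the mirror image of the previous test: the first target runs with --disable-cpumask-locks, so core 0 is used but left unlocked, and the second target, started without the flag on its own RPC socket, is expected to acquire the core lock normally. Roughly (paths shortened, sockets as traced):

    spdk_tgt -m 0x1 --disable-cpumask-locks &        # core 0 is used but left unlocked
    waitforlisten $! /var/tmp/spdk.sock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # second target takes the core 0 lock itself
    waitforlisten $! /var/tmp/spdk2.sock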
00:06:06.052 [2024-07-15 20:56:33.103080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770249 ] 00:06:06.052 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.052 [2024-07-15 20:56:33.195780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.310 [2024-07-15 20:56:33.344602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.877 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.877 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:06.877 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 770249 00:06:06.877 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.877 20:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 770249 00:06:07.812 lslocks: write error 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 769990 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 769990 ']' 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 769990 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 769990 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 769990' 00:06:07.812 killing process with pid 769990 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 769990 00:06:07.812 20:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 769990 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 770249 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 770249 ']' 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 770249 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 770249 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 770249' 00:06:08.379 killing process with pid 770249 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 770249 00:06:08.379 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 770249 00:06:08.638 00:06:08.638 real 0m3.557s 00:06:08.638 user 0m3.792s 00:06:08.638 sys 0m1.125s 00:06:08.638 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.638 20:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.638 ************************************ 00:06:08.638 END TEST locking_app_on_unlocked_coremask 00:06:08.638 ************************************ 00:06:08.638 20:56:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.638 20:56:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.638 20:56:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.638 20:56:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.638 20:56:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.638 ************************************ 00:06:08.638 START TEST locking_app_on_locked_coremask 00:06:08.638 ************************************ 00:06:08.638 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:08.638 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=770698 00:06:08.638 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 770698 /var/tmp/spdk.sock 00:06:08.639 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.639 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 770698 ']' 00:06:08.639 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.639 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.639 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.639 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.639 20:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.639 [2024-07-15 20:56:35.896439] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:08.639 [2024-07-15 20:56:35.896503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770698 ] 00:06:08.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.898 [2024-07-15 20:56:35.964857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.898 [2024-07-15 20:56:36.041179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=770829 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 770829 /var/tmp/spdk2.sock 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 770829 /var/tmp/spdk2.sock 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 770829 /var/tmp/spdk2.sock 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 770829 ']' 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.464 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.465 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.465 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.465 20:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.465 [2024-07-15 20:56:36.725619] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:09.465 [2024-07-15 20:56:36.725664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770829 ] 00:06:09.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.724 [2024-07-15 20:56:36.818942] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 770698 has claimed it. 00:06:09.724 [2024-07-15 20:56:36.818981] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.292 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (770829) - No such process 00:06:10.292 ERROR: process (pid: 770829) is no longer running 00:06:10.292 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.292 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:10.292 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:10.293 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.293 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.293 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.293 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 770698 00:06:10.293 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 770698 00:06:10.293 20:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.862 lslocks: write error 00:06:10.862 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 770698 00:06:10.862 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 770698 ']' 00:06:10.862 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 770698 00:06:10.862 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.862 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.862 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 770698 00:06:11.127 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.127 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.127 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 770698' 00:06:11.127 killing process with pid 770698 00:06:11.127 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 770698 00:06:11.127 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 770698 00:06:11.385 00:06:11.385 real 0m2.604s 00:06:11.385 user 0m2.831s 00:06:11.385 sys 0m0.803s 00:06:11.385 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.385 20:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 ************************************ 00:06:11.385 END TEST locking_app_on_locked_coremask 00:06:11.385 ************************************ 00:06:11.385 20:56:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:11.385 20:56:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:11.385 20:56:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.385 20:56:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.385 20:56:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 ************************************ 00:06:11.385 START TEST locking_overlapped_coremask 00:06:11.385 ************************************ 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=771129 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 771129 /var/tmp/spdk.sock 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 771129 ']' 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.385 20:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 [2024-07-15 20:56:38.579473] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
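The failure traced just above is the expected outcome of locking_app_on_locked_coremask: with the first target holding the core 0 lock, a second target started on the same mask aborts with "Cannot create lock on core 0 ...", and the harness wraps the wait in its NOT helper so the launch failing is what makes the test pass. The overlapped-coremask test starting here plays the same trick with multi-core masks. A sketch of the conflict, treating NOT simply as "succeed only if the command fails" (paths shortened):

    spdk_tgt -m 0x1 &                                # holds the core 0 lock
    waitforlisten $! /var/tmp/spdk.sock
    # a second target on the same mask must abort with
    #   "Cannot create lock on core 0, probably process <pid> has claimed it."
    if spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second target started on a locked core" >&2
        exit 1
    fi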
00:06:11.385 [2024-07-15 20:56:38.579531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771129 ] 00:06:11.385 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.385 [2024-07-15 20:56:38.645914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.648 [2024-07-15 20:56:38.721861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.648 [2024-07-15 20:56:38.721956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.648 [2024-07-15 20:56:38.721958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=771394 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 771394 /var/tmp/spdk2.sock 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 771394 /var/tmp/spdk2.sock 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 771394 /var/tmp/spdk2.sock 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 771394 ']' 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.216 20:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.216 [2024-07-15 20:56:39.416437] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:12.217 [2024-07-15 20:56:39.416530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771394 ] 00:06:12.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.475 [2024-07-15 20:56:39.512885] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 771129 has claimed it. 00:06:12.475 [2024-07-15 20:56:39.512926] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:13.044 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (771394) - No such process 00:06:13.044 ERROR: process (pid: 771394) is no longer running 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 771129 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 771129 ']' 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 771129 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771129 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771129' 00:06:13.044 killing process with pid 771129 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 771129 00:06:13.044 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 771129 00:06:13.303 00:06:13.303 real 0m1.871s 00:06:13.303 user 0m5.262s 00:06:13.303 sys 0m0.446s 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.303 ************************************ 00:06:13.303 END TEST locking_overlapped_coremask 00:06:13.303 ************************************ 00:06:13.303 20:56:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:13.303 20:56:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:13.303 20:56:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.303 20:56:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.303 20:56:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.303 ************************************ 00:06:13.303 START TEST locking_overlapped_coremask_via_rpc 00:06:13.303 ************************************ 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=771568 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 771568 /var/tmp/spdk.sock 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 771568 ']' 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.303 20:56:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.303 [2024-07-15 20:56:40.537327] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:13.303 [2024-07-15 20:56:40.537388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771568 ] 00:06:13.303 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.563 [2024-07-15 20:56:40.606402] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.563 [2024-07-15 20:56:40.606428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.563 [2024-07-15 20:56:40.687074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.563 [2024-07-15 20:56:40.687169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.563 [2024-07-15 20:56:40.687169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=771707 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 771707 /var/tmp/spdk2.sock 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 771707 ']' 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.132 20:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.132 [2024-07-15 20:56:41.390880] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:14.132 [2024-07-15 20:56:41.390966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771707 ] 00:06:14.391 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.391 [2024-07-15 20:56:41.486840] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:14.391 [2024-07-15 20:56:41.486867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.391 [2024-07-15 20:56:41.637550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.391 [2024-07-15 20:56:41.637668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.391 [2024-07-15 20:56:41.637670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.962 [2024-07-15 20:56:42.229505] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 771568 has claimed it. 
00:06:14.962 request: 00:06:14.962 { 00:06:14.962 "method": "framework_enable_cpumask_locks", 00:06:14.962 "req_id": 1 00:06:14.962 } 00:06:14.962 Got JSON-RPC error response 00:06:14.962 response: 00:06:14.962 { 00:06:14.962 "code": -32603, 00:06:14.962 "message": "Failed to claim CPU core: 2" 00:06:14.962 } 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 771568 /var/tmp/spdk.sock 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 771568 ']' 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.962 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 771707 /var/tmp/spdk2.sock 00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 771707 ']' 00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
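[Editor's note, not part of the captured output] The "Failed to claim CPU core: 2" response above follows directly from the two core masks used in this test: the first spdk_tgt (pid 771568, /var/tmp/spdk.sock) was started with -m 0x7 and the second (pid 771707, /var/tmp/spdk2.sock) with -m 0x1c, so the masks overlap on core 2. Both targets start with --disable-cpumask-locks; the first one then takes its per-core lock files (/var/tmp/spdk_cpu_lock_000..002, which check_remaining_locks verifies below) via the framework_enable_cpumask_locks RPC, and the same RPC sent to the second target's socket fails because core 2 is already locked. A minimal sketch for decoding such masks (the helper name is hypothetical, not part of the test scripts):

    decode_mask() {                    # list the core numbers set in a hex coremask
      local mask=$(( $1 )) core=0 cores=""
      while (( mask )); do
        (( mask & 1 )) && cores+=" $core"
        (( mask >>= 1, core++ ))
      done
      echo "cores:$cores"
    }
    decode_mask 0x7    # cores: 0 1 2   (pid 771568)
    decode_mask 0x1c   # cores: 2 3 4   (pid 771707) -- core 2 overlaps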
00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.221 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.480 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.480 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.480 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:15.480 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.480 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.480 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.480 00:06:15.480 real 0m2.105s 00:06:15.480 user 0m0.827s 00:06:15.480 sys 0m0.207s 00:06:15.480 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.480 20:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.480 ************************************ 00:06:15.480 END TEST locking_overlapped_coremask_via_rpc 00:06:15.480 ************************************ 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:15.480 20:56:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:15.480 20:56:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 771568 ]] 00:06:15.480 20:56:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 771568 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 771568 ']' 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 771568 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771568 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771568' 00:06:15.480 killing process with pid 771568 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 771568 00:06:15.480 20:56:42 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 771568 00:06:15.738 20:56:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 771707 ]] 00:06:15.738 20:56:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 771707 00:06:15.738 20:56:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 771707 ']' 00:06:15.738 20:56:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 771707 00:06:15.738 20:56:43 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:06:15.738 20:56:43 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.998 20:56:43 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771707 00:06:15.998 20:56:43 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:15.998 20:56:43 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:15.998 20:56:43 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771707' 00:06:15.998 killing process with pid 771707 00:06:15.998 20:56:43 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 771707 00:06:15.998 20:56:43 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 771707 00:06:16.256 20:56:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.256 20:56:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:16.256 20:56:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 771568 ]] 00:06:16.256 20:56:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 771568 00:06:16.256 20:56:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 771568 ']' 00:06:16.256 20:56:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 771568 00:06:16.256 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (771568) - No such process 00:06:16.256 20:56:43 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 771568 is not found' 00:06:16.256 Process with pid 771568 is not found 00:06:16.256 20:56:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 771707 ]] 00:06:16.256 20:56:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 771707 00:06:16.256 20:56:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 771707 ']' 00:06:16.256 20:56:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 771707 00:06:16.256 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (771707) - No such process 00:06:16.256 20:56:43 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 771707 is not found' 00:06:16.256 Process with pid 771707 is not found 00:06:16.256 20:56:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.256 00:06:16.256 real 0m18.709s 00:06:16.256 user 0m30.894s 00:06:16.256 sys 0m6.109s 00:06:16.256 20:56:43 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.256 20:56:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.256 ************************************ 00:06:16.256 END TEST cpu_locks 00:06:16.256 ************************************ 00:06:16.256 20:56:43 event -- common/autotest_common.sh@1142 -- # return 0 00:06:16.256 00:06:16.256 real 0m43.914s 00:06:16.256 user 1m21.040s 00:06:16.256 sys 0m10.305s 00:06:16.256 20:56:43 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.256 20:56:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.256 ************************************ 00:06:16.256 END TEST event 00:06:16.256 ************************************ 00:06:16.256 20:56:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.256 20:56:43 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:16.256 20:56:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.256 20:56:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.256 20:56:43 -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.256 ************************************ 00:06:16.256 START TEST thread 00:06:16.256 ************************************ 00:06:16.256 20:56:43 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:16.516 * Looking for test storage... 00:06:16.516 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:16.516 20:56:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.516 20:56:43 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:16.516 20:56:43 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.516 20:56:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.516 ************************************ 00:06:16.516 START TEST thread_poller_perf 00:06:16.516 ************************************ 00:06:16.516 20:56:43 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.516 [2024-07-15 20:56:43.654803] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:16.516 [2024-07-15 20:56:43.654890] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772244 ] 00:06:16.516 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.516 [2024-07-15 20:56:43.724835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.516 [2024-07-15 20:56:43.798105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.516 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:17.896 ====================================== 00:06:17.896 busy:2504647404 (cyc) 00:06:17.896 total_run_count: 881000 00:06:17.896 tsc_hz: 2500000000 (cyc) 00:06:17.896 ====================================== 00:06:17.896 poller_cost: 2842 (cyc), 1136 (nsec) 00:06:17.896 00:06:17.896 real 0m1.226s 00:06:17.896 user 0m1.137s 00:06:17.896 sys 0m0.085s 00:06:17.896 20:56:44 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.896 20:56:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.896 ************************************ 00:06:17.896 END TEST thread_poller_perf 00:06:17.896 ************************************ 00:06:17.896 20:56:44 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:17.896 20:56:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.896 20:56:44 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:17.896 20:56:44 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.896 20:56:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.896 ************************************ 00:06:17.896 START TEST thread_poller_perf 00:06:17.896 ************************************ 00:06:17.896 20:56:44 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.896 [2024-07-15 20:56:44.957134] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:17.896 [2024-07-15 20:56:44.957215] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772415 ] 00:06:17.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.896 [2024-07-15 20:56:45.031120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.896 [2024-07-15 20:56:45.103435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.896 Running 1000 pollers for 1 seconds with 0 microseconds period. 
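[Editor's note, not part of the captured output] The poller_cost figure in the summary above is effectively the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. For the 1-microsecond-period run this reproduces the printed 2842 (cyc) / 1136 (nsec); the 0-microsecond run that follows can be checked the same way. A sketch using shell integer arithmetic:

    echo $(( 2504647404 / 881000 ))                                # 2842 cycles per poller run
    echo $(( 2504647404 / 881000 * 1000000000 / 2500000000 ))      # 1136 nanoseconds per poller run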
00:06:19.275 ====================================== 00:06:19.275 busy:2501310410 (cyc) 00:06:19.275 total_run_count: 14037000 00:06:19.275 tsc_hz: 2500000000 (cyc) 00:06:19.275 ====================================== 00:06:19.275 poller_cost: 178 (cyc), 71 (nsec) 00:06:19.275 00:06:19.275 real 0m1.227s 00:06:19.275 user 0m1.128s 00:06:19.275 sys 0m0.095s 00:06:19.275 20:56:46 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.275 20:56:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.275 ************************************ 00:06:19.275 END TEST thread_poller_perf 00:06:19.275 ************************************ 00:06:19.275 20:56:46 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:19.275 20:56:46 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:19.275 20:56:46 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:19.275 20:56:46 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.275 20:56:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.275 20:56:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.275 ************************************ 00:06:19.275 START TEST thread_spdk_lock 00:06:19.275 ************************************ 00:06:19.275 20:56:46 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:19.275 [2024-07-15 20:56:46.261625] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:19.275 [2024-07-15 20:56:46.261709] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772651 ] 00:06:19.275 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.275 [2024-07-15 20:56:46.335521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.275 [2024-07-15 20:56:46.406119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.275 [2024-07-15 20:56:46.406122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.843 [2024-07-15 20:56:46.889464] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:19.843 [2024-07-15 20:56:46.889508] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:19.843 [2024-07-15 20:56:46.889519] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14d3ac0 00:06:19.843 [2024-07-15 20:56:46.890519] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:19.843 [2024-07-15 20:56:46.890623] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:19.843 [2024-07-15 20:56:46.890642] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:19.843 Starting test contend 00:06:19.843 Worker Delay Wait us Hold us Total us 00:06:19.843 0 3 174483 181925 356409 00:06:19.843 1 5 92751 283328 376079 00:06:19.843 PASS test contend 00:06:19.843 Starting test hold_by_poller 00:06:19.843 PASS test hold_by_poller 00:06:19.843 Starting test hold_by_message 00:06:19.843 PASS test hold_by_message 00:06:19.843 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:19.843 100014 assertions passed 00:06:19.843 0 assertions failed 00:06:19.843 00:06:19.843 real 0m0.709s 00:06:19.843 user 0m1.096s 00:06:19.843 sys 0m0.094s 00:06:19.844 20:56:46 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.844 20:56:46 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:19.844 ************************************ 00:06:19.844 END TEST thread_spdk_lock 00:06:19.844 ************************************ 00:06:19.844 20:56:46 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:19.844 00:06:19.844 real 0m3.481s 00:06:19.844 user 0m3.485s 00:06:19.844 sys 0m0.494s 00:06:19.844 20:56:46 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.844 20:56:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.844 ************************************ 00:06:19.844 END TEST thread 00:06:19.844 ************************************ 00:06:19.844 20:56:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.844 20:56:47 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:19.844 20:56:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.844 20:56:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.844 20:56:47 -- common/autotest_common.sh@10 -- # set +x 00:06:19.844 ************************************ 00:06:19.844 START TEST accel 00:06:19.844 ************************************ 00:06:19.844 20:56:47 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:20.103 * Looking for test storage... 00:06:20.103 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:20.103 20:56:47 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:20.103 20:56:47 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:20.103 20:56:47 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.103 20:56:47 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=772970 00:06:20.103 20:56:47 accel -- accel/accel.sh@63 -- # waitforlisten 772970 00:06:20.103 20:56:47 accel -- common/autotest_common.sh@829 -- # '[' -z 772970 ']' 00:06:20.103 20:56:47 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.103 20:56:47 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.104 20:56:47 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:20.104 20:56:47 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:20.104 20:56:47 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:20.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.104 20:56:47 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.104 20:56:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.104 20:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.104 20:56:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.104 20:56:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.104 20:56:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.104 20:56:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.104 20:56:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:20.104 20:56:47 accel -- accel/accel.sh@41 -- # jq -r . 00:06:20.104 [2024-07-15 20:56:47.205418] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:20.104 [2024-07-15 20:56:47.205481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772970 ] 00:06:20.104 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.104 [2024-07-15 20:56:47.272026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.104 [2024-07-15 20:56:47.349496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.040 20:56:48 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.040 20:56:48 accel -- common/autotest_common.sh@862 -- # return 0 00:06:21.040 20:56:48 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:21.040 20:56:48 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:21.040 20:56:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:21.040 20:56:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:21.040 20:56:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:21.040 20:56:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:21.040 20:56:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.040 20:56:48 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:21.040 20:56:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.040 20:56:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.040 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.040 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.040 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.040 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.040 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.040 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.040 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.040 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.040 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.040 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.040 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.040 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.040 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.040 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.040 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.040 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 
20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.041 20:56:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.041 20:56:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.041 20:56:48 accel -- accel/accel.sh@75 -- # killprocess 772970 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@948 -- # '[' -z 772970 ']' 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@952 -- # kill -0 772970 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@953 -- # uname 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 772970 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 772970' 00:06:21.041 killing process with pid 772970 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@967 -- # kill 772970 00:06:21.041 20:56:48 accel -- common/autotest_common.sh@972 -- # wait 772970 00:06:21.301 20:56:48 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:21.301 20:56:48 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:21.301 20:56:48 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.301 20:56:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.301 20:56:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.301 20:56:48 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:21.301 20:56:48 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:21.301 20:56:48 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.301 20:56:48 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:21.301 20:56:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.301 20:56:48 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:21.301 20:56:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.301 20:56:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.301 20:56:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.301 ************************************ 00:06:21.301 START TEST accel_missing_filename 00:06:21.301 ************************************ 00:06:21.301 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:21.301 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:21.301 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:21.301 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:21.301 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.301 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:21.560 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.560 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:21.560 20:56:48 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:21.560 [2024-07-15 20:56:48.611814] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:21.560 [2024-07-15 20:56:48.611901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773272 ] 00:06:21.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.560 [2024-07-15 20:56:48.682332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.560 [2024-07-15 20:56:48.753336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.560 [2024-07-15 20:56:48.793174] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.819 [2024-07-15 20:56:48.852849] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:21.819 A filename is required. 
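[Editor's note, not part of the captured output] The "A filename is required." failure above is the expected outcome of this test: accel_perf was invoked with -w compress but without -l, and per its usage text -l names the uncompressed input file for compress/decompress workloads. An invocation that gets past this particular error supplies the input file, as the compress_verify test below does (same binary and input paths as elsewhere in this log; adding -y is rejected for compress, which is exactly what that next test demonstrates):

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib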
00:06:21.819 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:21.819 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.819 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:21.819 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:21.819 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:21.819 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.819 00:06:21.819 real 0m0.332s 00:06:21.819 user 0m0.234s 00:06:21.819 sys 0m0.134s 00:06:21.819 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.819 20:56:48 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:21.819 ************************************ 00:06:21.819 END TEST accel_missing_filename 00:06:21.819 ************************************ 00:06:21.819 20:56:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.819 20:56:48 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:21.819 20:56:48 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:21.819 20:56:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.819 20:56:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.819 ************************************ 00:06:21.819 START TEST accel_compress_verify 00:06:21.819 ************************************ 00:06:21.819 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:21.819 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:21.819 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:21.819 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:21.819 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.819 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:21.819 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.819 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:21.819 20:56:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:21.819 20:56:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:21.819 20:56:49 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.819 20:56:49 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.820 20:56:49 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.820 20:56:49 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.820 
20:56:49 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.820 20:56:49 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:21.820 20:56:49 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:21.820 [2024-07-15 20:56:49.026894] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:21.820 [2024-07-15 20:56:49.026984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773297 ] 00:06:21.820 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.820 [2024-07-15 20:56:49.096639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.079 [2024-07-15 20:56:49.167474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.079 [2024-07-15 20:56:49.207088] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.079 [2024-07-15 20:56:49.266601] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:22.079 00:06:22.079 Compression does not support the verify option, aborting. 00:06:22.079 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:22.079 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.079 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:22.079 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:22.079 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:22.079 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.079 00:06:22.079 real 0m0.330s 00:06:22.079 user 0m0.238s 00:06:22.079 sys 0m0.129s 00:06:22.079 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.079 20:56:49 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:22.079 ************************************ 00:06:22.079 END TEST accel_compress_verify 00:06:22.079 ************************************ 00:06:22.339 20:56:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.339 20:56:49 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:22.339 20:56:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:22.339 20:56:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.339 20:56:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.339 ************************************ 00:06:22.339 START TEST accel_wrong_workload 00:06:22.339 ************************************ 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:22.339 20:56:49 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:22.339 20:56:49 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:22.339 Unsupported workload type: foobar 00:06:22.339 [2024-07-15 20:56:49.439408] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:22.339 accel_perf options: 00:06:22.339 [-h help message] 00:06:22.339 [-q queue depth per core] 00:06:22.339 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:22.339 [-T number of threads per core 00:06:22.339 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:22.339 [-t time in seconds] 00:06:22.339 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:22.339 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:22.339 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:22.339 [-l for compress/decompress workloads, name of uncompressed input file 00:06:22.339 [-S for crc32c workload, use this seed value (default 0) 00:06:22.339 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:22.339 [-f for fill workload, use this BYTE value (default 255) 00:06:22.339 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:22.339 [-y verify result if this switch is on] 00:06:22.339 [-a tasks to allocate per core (default: same value as -q)] 00:06:22.339 Can be used to spread operations across a wider range of memory. 
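[Editor's note, not part of the captured output] The usage text above is printed because this test deliberately passes -w foobar, which is not in the supported workload list, so spdk_app_parse_args rejects it and accel_perf exits with the help output. For contrast, a well-formed invocation built only from options documented in that help text (hypothetical, not run in this log) could look like the line below; the next test, accel_negative_buffers, tries the same xor workload with -x -1 and is rejected with "-x option must be non-negative.":

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2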
00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.339 00:06:22.339 real 0m0.029s 00:06:22.339 user 0m0.009s 00:06:22.339 sys 0m0.020s 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.339 20:56:49 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:22.339 ************************************ 00:06:22.339 END TEST accel_wrong_workload 00:06:22.339 ************************************ 00:06:22.339 Error: writing output failed: Broken pipe 00:06:22.339 20:56:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.339 20:56:49 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:22.339 20:56:49 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:22.339 20:56:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.339 20:56:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.339 ************************************ 00:06:22.339 START TEST accel_negative_buffers 00:06:22.339 ************************************ 00:06:22.339 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:22.339 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:22.339 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:22.339 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:22.339 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.339 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:22.339 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.339 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:22.339 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:22.339 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:22.339 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.339 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.339 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.340 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.340 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.340 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:22.340 20:56:49 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:22.340 -x option must be non-negative. 
00:06:22.340 [2024-07-15 20:56:49.536385] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:22.340 accel_perf options: 00:06:22.340 [-h help message] 00:06:22.340 [-q queue depth per core] 00:06:22.340 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:22.340 [-T number of threads per core 00:06:22.340 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:22.340 [-t time in seconds] 00:06:22.340 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:22.340 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:22.340 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:22.340 [-l for compress/decompress workloads, name of uncompressed input file 00:06:22.340 [-S for crc32c workload, use this seed value (default 0) 00:06:22.340 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:22.340 [-f for fill workload, use this BYTE value (default 255) 00:06:22.340 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:22.340 [-y verify result if this switch is on] 00:06:22.340 [-a tasks to allocate per core (default: same value as -q)] 00:06:22.340 Can be used to spread operations across a wider range of memory. 00:06:22.340 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:22.340 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.340 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.340 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.340 00:06:22.340 real 0m0.019s 00:06:22.340 user 0m0.009s 00:06:22.340 sys 0m0.011s 00:06:22.340 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.340 20:56:49 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:22.340 ************************************ 00:06:22.340 END TEST accel_negative_buffers 00:06:22.340 ************************************ 00:06:22.340 Error: writing output failed: Broken pipe 00:06:22.340 20:56:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.340 20:56:49 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:22.340 20:56:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:22.340 20:56:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.340 20:56:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.340 ************************************ 00:06:22.340 START TEST accel_crc32c 00:06:22.340 ************************************ 00:06:22.340 20:56:49 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:22.340 20:56:49 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:22.600 [2024-07-15 20:56:49.641628] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:22.600 [2024-07-15 20:56:49.641716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773452 ] 00:06:22.600 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.600 [2024-07-15 20:56:49.711967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.600 [2024-07-15 20:56:49.788453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.600 20:56:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:23.980 20:56:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.980 00:06:23.980 real 0m1.343s 00:06:23.980 user 0m1.228s 00:06:23.980 sys 0m0.128s 00:06:23.980 20:56:50 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.980 20:56:50 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:23.980 ************************************ 00:06:23.980 END TEST accel_crc32c 00:06:23.980 ************************************ 00:06:23.980 20:56:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.980 20:56:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:23.980 20:56:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:23.980 20:56:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.980 20:56:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.980 ************************************ 00:06:23.980 START TEST accel_crc32c_C2 00:06:23.980 ************************************ 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:23.980 20:56:51 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:23.980 [2024-07-15 20:56:51.068757] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:23.980 [2024-07-15 20:56:51.068839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773674 ] 00:06:23.980 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.980 [2024-07-15 20:56:51.139143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.980 [2024-07-15 20:56:51.210874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:23.980 20:56:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.359 00:06:25.359 real 0m1.338s 00:06:25.359 user 0m1.210s 00:06:25.359 sys 0m0.142s 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.359 20:56:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:25.359 ************************************ 00:06:25.359 END TEST accel_crc32c_C2 00:06:25.359 ************************************ 00:06:25.359 20:56:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.359 20:56:52 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:25.359 20:56:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:25.359 20:56:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.359 20:56:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.359 ************************************ 00:06:25.359 START TEST accel_copy 00:06:25.359 ************************************ 00:06:25.359 20:56:52 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
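Each case in this log is driven the same way: run_test prints a START/END banner pair around the test body and times it, which is where the real/user/sys figures between the banners come from. run_test itself belongs to autotest_common.sh and its body is not shown here; the following is only a rough sketch of the shape the output suggests, with all details assumed.

    # Assumed shape of the run_test wrapper (illustration only).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # the traced test body, e.g. accel_test -t 1 -w copy -y
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
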
00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:25.359 20:56:52 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:25.359 [2024-07-15 20:56:52.491168] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:25.359 [2024-07-15 20:56:52.491248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773938 ] 00:06:25.359 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.359 [2024-07-15 20:56:52.563167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.359 [2024-07-15 20:56:52.634843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.618 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.619 20:56:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.631 
20:56:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.631 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.632 20:56:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.632 20:56:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.632 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.632 20:56:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.632 20:56:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.632 20:56:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:26.632 20:56:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.632 00:06:26.632 real 0m1.340s 00:06:26.632 user 0m1.224s 00:06:26.632 sys 0m0.130s 00:06:26.632 20:56:53 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.632 20:56:53 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.632 ************************************ 00:06:26.632 END TEST accel_copy 00:06:26.632 ************************************ 00:06:26.632 20:56:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.632 20:56:53 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.632 20:56:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:26.632 20:56:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.632 20:56:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.632 ************************************ 00:06:26.632 START TEST accel_fill 00:06:26.632 ************************************ 00:06:26.632 20:56:53 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:26.632 20:56:53 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:26.632 [2024-07-15 20:56:53.914954] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:26.632 [2024-07-15 20:56:53.915031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774221 ] 00:06:26.890 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.890 [2024-07-15 20:56:53.986432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.890 [2024-07-15 20:56:54.057656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
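The fill case being configured above is, like the others, a thin wrapper around the accel_perf example binary, and the command line the harness logs for it can be rerun on its own. The only piece omitted below is the -c /dev/fd/62 argument, through which the harness feeds a generated JSON configuration.

    # Direct rerun of the traced fill case (binary path as used in this workspace).
    # -t run time in seconds, -w workload, -f fill byte (128 = 0x80),
    # -q queue depth per core, -a tasks per core, -y verify the results.
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y
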
00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.890 20:56:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.266 20:56:55 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:28.266 20:56:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.266 00:06:28.267 real 0m1.341s 00:06:28.267 user 0m1.229s 00:06:28.267 sys 0m0.127s 00:06:28.267 20:56:55 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.267 20:56:55 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:28.267 ************************************ 00:06:28.267 END TEST accel_fill 00:06:28.267 ************************************ 00:06:28.267 20:56:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.267 20:56:55 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:28.267 20:56:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:28.267 20:56:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.267 20:56:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.267 ************************************ 00:06:28.267 START TEST accel_copy_crc32c 00:06:28.267 ************************************ 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:28.267 [2024-07-15 20:56:55.338563] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:28.267 [2024-07-15 20:56:55.338652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774508 ] 00:06:28.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.267 [2024-07-15 20:56:55.407957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.267 [2024-07-15 20:56:55.479377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.267 
20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.267 20:56:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.642 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.643 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.643 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:29.643 20:56:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.643 00:06:29.643 real 0m1.335s 00:06:29.643 user 0m1.222s 00:06:29.643 sys 0m0.127s 00:06:29.643 20:56:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.643 20:56:56 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:29.643 ************************************ 00:06:29.643 END TEST accel_copy_crc32c 00:06:29.643 ************************************ 00:06:29.643 20:56:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.643 20:56:56 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:29.643 20:56:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:29.643 20:56:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.643 20:56:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.643 ************************************ 00:06:29.643 START TEST accel_copy_crc32c_C2 00:06:29.643 ************************************ 00:06:29.643 20:56:56 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.643 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:29.643 [2024-07-15 20:56:56.760537] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:29.643 [2024-07-15 20:56:56.760622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774787 ] 00:06:29.643 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.643 [2024-07-15 20:56:56.831276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.643 [2024-07-15 20:56:56.904125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
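Every case's setup trace ends the same way: build_accel_config assembles a JSON configuration (the accel_json_cfg=(), IFS=, and jq -r . steps above) and accel_perf reads it from -c /dev/fd/62. The helper is defined in the accel test script rather than in this log, so the sketch below only illustrates the pattern; the JSON key and the fd-62 wiring shown are assumptions.

    # Assumed sketch of the config helper suggested by the trace.
    build_accel_config() {
        accel_json_cfg=()      # no module-specific fragments were requested in these runs
        local IFS=,
        # Join whatever fragments exist and normalise the document with jq.
        printf '{"accel_cfg":[%s]}\n' "${accel_json_cfg[*]}" | jq -r .
    }

    # Wiring as the logged command lines suggest (fd 62 fed from the helper):
    # accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 62< <(build_accel_config)
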
00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.902 20:56:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.838 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.838 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.838 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:30.838 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.838 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:30.838 20:56:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.838 00:06:30.838 real 0m1.341s 00:06:30.838 user 0m1.216s 00:06:30.838 sys 0m0.139s 00:06:30.838 20:56:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.838 20:56:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:30.838 ************************************ 00:06:30.838 END TEST accel_copy_crc32c_C2 00:06:30.838 ************************************ 00:06:30.838 20:56:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.838 20:56:58 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:30.838 20:56:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.838 20:56:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.838 20:56:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.097 ************************************ 00:06:31.097 START TEST accel_dualcast 00:06:31.097 ************************************ 00:06:31.097 20:56:58 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:31.097 [2024-07-15 20:56:58.186835] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
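The repeated IFS=:, read -r var val, and case "$var" xtrace that fills these tests is the harness splitting accel_perf's "Key: value" summary lines to pick out the opcode and module it asserts on afterwards. A standalone sketch of that pattern, using illustrative key names and sample input rather than this run's captured output:

    # Sketch of the 'split on ":" and keep selected fields' loop traced above.
    # The sample lines and key patterns are illustrative assumptions.
    printf '%s\n' 'Workload Type: dualcast' 'Module: software' |
      while IFS=: read -r var val; do
        case "$var" in
          *'Workload Type'*) echo "opcode=${val##* }" ;;
          *'Module'*)        echo "module=${val##* }" ;;
        esac
      done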
00:06:31.097 [2024-07-15 20:56:58.186919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775072 ] 00:06:31.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.097 [2024-07-15 20:56:58.257509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.097 [2024-07-15 20:56:58.327373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.098 20:56:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 20:56:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.475 20:56:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.476 20:56:59 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:32.476 20:56:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.476 00:06:32.476 real 0m1.337s 00:06:32.476 user 0m1.222s 00:06:32.476 sys 0m0.129s 00:06:32.476 20:56:59 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.476 20:56:59 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:32.476 ************************************ 00:06:32.476 END TEST accel_dualcast 00:06:32.476 ************************************ 00:06:32.476 20:56:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.476 20:56:59 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:32.476 20:56:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:32.476 20:56:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.476 20:56:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.476 ************************************ 00:06:32.476 START TEST accel_compare 00:06:32.476 ************************************ 00:06:32.476 20:56:59 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:32.476 20:56:59 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:32.476 [2024-07-15 20:56:59.604954] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
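Each test above ends with the same three checks ([[ -n software ]], [[ -n <opcode> ]], and the software == software comparison): a module and an opcode must have been parsed out of the run, and the module must match the expected one, which is the software module on this host. A self-contained sketch of those assertions, with the two values written in as stand-ins for what the harness parses live:

    # Sketch of the end-of-test checks; both variables are illustrative
    # stand-ins for fields parsed out of accel_perf's output.
    accel_module=software
    accel_opc=compare
    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] &&
      [[ "$accel_module" == software ]] &&
      echo "PASS: $accel_opc ran on the $accel_module module"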
00:06:32.476 [2024-07-15 20:56:59.605038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775358 ] 00:06:32.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.476 [2024-07-15 20:56:59.674607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.476 [2024-07-15 20:56:59.745827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:32.735 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.736 20:56:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.673 
20:57:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.673 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.674 20:57:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.674 20:57:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.674 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.674 20:57:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.674 20:57:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.674 20:57:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:33.674 20:57:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.674 00:06:33.674 real 0m1.339s 00:06:33.674 user 0m1.223s 00:06:33.674 sys 0m0.130s 00:06:33.674 20:57:00 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.674 20:57:00 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:33.674 ************************************ 00:06:33.674 END TEST accel_compare 00:06:33.674 ************************************ 00:06:33.674 20:57:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.933 20:57:00 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:33.933 20:57:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.933 20:57:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.933 20:57:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.933 ************************************ 00:06:33.933 START TEST accel_xor 00:06:33.933 ************************************ 00:06:33.933 20:57:01 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:33.933 [2024-07-15 20:57:01.026012] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:33.933 [2024-07-15 20:57:01.026089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775714 ] 00:06:33.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.933 [2024-07-15 20:57:01.096155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.933 [2024-07-15 20:57:01.166833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.934 20:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.314 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.315 00:06:35.315 real 0m1.336s 00:06:35.315 user 0m1.216s 00:06:35.315 sys 0m0.133s 00:06:35.315 20:57:02 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.315 20:57:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:35.315 ************************************ 00:06:35.315 END TEST accel_xor 00:06:35.315 ************************************ 00:06:35.315 20:57:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.315 20:57:02 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:35.315 20:57:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:35.315 20:57:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.315 20:57:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.315 ************************************ 00:06:35.315 START TEST accel_xor 00:06:35.315 ************************************ 00:06:35.315 20:57:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:35.315 20:57:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:35.315 [2024-07-15 20:57:02.444336] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
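The two accel_xor tests in this section differ only in the source-buffer count: the first pass ran accel_perf with -t 1 -w xor -y and its trace reports 2 sources, while the pass starting here adds -x 3. A sketch of the pair, under the same build-tree and hugepage assumptions as earlier:

    # Sketch: the two xor variants exercised in this section.
    ACCEL_PERF=./build/examples/accel_perf     # assumption: run from an SPDK build tree
    sudo "$ACCEL_PERF" -t 1 -w xor -y          # default source count (2 per this log)
    sudo "$ACCEL_PERF" -t 1 -w xor -y -x 3     # three xor sources, as in the second pass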
00:06:35.315 [2024-07-15 20:57:02.444417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776054 ] 00:06:35.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.315 [2024-07-15 20:57:02.514239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.315 [2024-07-15 20:57:02.585325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.575 20:57:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:36.516 20:57:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.516 00:06:36.516 real 0m1.338s 00:06:36.516 user 0m1.222s 00:06:36.516 sys 0m0.129s 00:06:36.516 20:57:03 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.516 20:57:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:36.516 ************************************ 00:06:36.516 END TEST accel_xor 00:06:36.516 ************************************ 00:06:36.516 20:57:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.516 20:57:03 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:36.516 20:57:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:36.516 20:57:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.516 20:57:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.775 ************************************ 00:06:36.775 START TEST accel_dif_verify 00:06:36.775 ************************************ 00:06:36.775 20:57:03 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.775 20:57:03 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.776 20:57:03 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.776 20:57:03 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.776 20:57:03 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.776 20:57:03 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:36.776 20:57:03 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:36.776 [2024-07-15 20:57:03.867851] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
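Unlike the copy, compare, and xor cases, the dif_verify pass starting here (and the dif_generate pass that follows) is launched without -y, and its trace shows val=No where the verified runs show val=Yes, along with the extra 512-byte and 8-byte sizes the DIF workloads carry. A sketch of those two invocations as they appear in the log, same build-tree and hugepage assumptions as before:

    # Sketch: the two DIF workloads from this section, run without -y,
    # matching the command lines traced above.
    ACCEL_PERF=./build/examples/accel_perf     # assumption: adjust to your checkout
    sudo "$ACCEL_PERF" -t 1 -w dif_verify
    sudo "$ACCEL_PERF" -t 1 -w dif_generate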
00:06:36.776 [2024-07-15 20:57:03.867932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776379 ] 00:06:36.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.776 [2024-07-15 20:57:03.938798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.776 [2024-07-15 20:57:04.009800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.776 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.035 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.035 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.035 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.035 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.035 20:57:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.035 20:57:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.035 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.035 20:57:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:37.973 20:57:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.973 00:06:37.973 real 0m1.340s 00:06:37.973 user 0m1.226s 00:06:37.973 sys 0m0.129s 00:06:37.973 20:57:05 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.973 20:57:05 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:37.973 ************************************ 00:06:37.973 END TEST accel_dif_verify 00:06:37.973 ************************************ 00:06:37.973 20:57:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.973 20:57:05 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:37.973 20:57:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:37.973 20:57:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.973 20:57:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.973 ************************************ 00:06:37.973 START TEST accel_dif_generate 00:06:37.973 ************************************ 00:06:37.973 20:57:05 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:37.973 20:57:05 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:37.973 20:57:05 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:37.973 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.973 
20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.973 20:57:05 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:38.232 20:57:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:38.233 [2024-07-15 20:57:05.282664] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:38.233 [2024-07-15 20:57:05.282746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776975 ] 00:06:38.233 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.233 [2024-07-15 20:57:05.352926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.233 [2024-07-15 20:57:05.422948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:38.233 20:57:05 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 20:57:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.613 20:57:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:39.613 20:57:06 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.613 00:06:39.613 real 0m1.338s 00:06:39.613 user 0m1.221s 00:06:39.613 sys 0m0.132s 00:06:39.613 20:57:06 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.613 20:57:06 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:39.613 ************************************ 00:06:39.613 END TEST accel_dif_generate 00:06:39.613 ************************************ 00:06:39.613 20:57:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.613 20:57:06 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:39.613 20:57:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:39.613 20:57:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.613 20:57:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.613 ************************************ 00:06:39.613 START TEST accel_dif_generate_copy 00:06:39.613 ************************************ 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:39.613 [2024-07-15 20:57:06.705343] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
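The dense runs of "IFS=:", "read -r var val", "case \"$var\" in", and "val=..." entries that dominate this part of the log are xtrace output from the option-reporting loop in accel/accel.sh (the @19-@23 markers in the trace prefixes): the script reads colon-separated key/value pairs and keeps a few of them (accel_opc, accel_module) for the post-run checks such as [[ -n software ]] and [[ -n dif_verify ]]. The following is only a rough sketch of that shape reconstructed from the trace; the key names and the producer function are stand-ins, not the real accel.sh code.

describe_run() { printf 'opcode:%s\nmodule:%s\n' "$1" software; }  # stand-in producer (assumption)

while IFS=: read -r var val; do          # accel.sh@19 in the trace prefixes
  case "$var" in                         # accel.sh@21
    *module*) accel_module=$val ;;       # trace shows accel_module=software (accel.sh@22)
    *opcode*) accel_opc=$val ;;          # trace shows accel_opc=dif_generate etc. (accel.sh@23)
    *) ;;                                # every other field is only echoed by xtrace
  esac
done < <(describe_run dif_generate)
printf 'opc=%s module=%s\n' "$accel_opc" "$accel_module"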
00:06:39.613 [2024-07-15 20:57:06.705427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777319 ] 00:06:39.613 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.613 [2024-07-15 20:57:06.777400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.613 [2024-07-15 20:57:06.849694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
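Every accel_perf invocation in this section takes "-c /dev/fd/62". The build_accel_config steps visible in the trace (accel_json_cfg=(), local IFS=,, jq -r . at accel.sh@31-@41) suggest that optional module JSON fragments are comma-joined and streamed to the child over an inherited descriptor. A rough, assumption-laden sketch of that plumbing follows; only the comma-join and the jq -r . pass are visible in the trace, the JSON layout and helper name are guesses.

accel_json_cfg=()                         # would hold per-module JSON fragments, empty here
build_config() {
    local IFS=,                           # join fragments with commas (accel.sh@40)
    printf '{"accel":[%s]}' "${accel_json_cfg[*]}" | jq -r .   # layout is a guess
}
./build/examples/accel_perf -c <(build_config) -t 1 -w dif_generate_copy
# <(...) expands to a /dev/fd/N path, which is where the /dev/fd/62 seen in the log comes from.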
00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.614 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.873 20:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.810 00:06:40.810 real 0m1.340s 00:06:40.810 user 0m1.222s 00:06:40.810 sys 0m0.132s 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.810 20:57:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.810 ************************************ 00:06:40.810 END TEST accel_dif_generate_copy 00:06:40.810 ************************************ 00:06:40.810 20:57:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.810 20:57:08 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:40.810 20:57:08 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:40.810 20:57:08 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:40.810 20:57:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.810 20:57:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.069 ************************************ 00:06:41.069 START TEST accel_comp 00:06:41.069 ************************************ 00:06:41.069 20:57:08 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:41.069 20:57:08 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:41.069 20:57:08 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:41.069 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.069 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.069 20:57:08 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:41.070 [2024-07-15 20:57:08.130265] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:41.070 [2024-07-15 20:57:08.130347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777533 ] 00:06:41.070 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.070 [2024-07-15 20:57:08.201162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.070 [2024-07-15 20:57:08.271733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.070 20:57:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.448 20:57:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:42.449 20:57:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.449 00:06:42.449 real 0m1.341s 00:06:42.449 user 0m1.214s 00:06:42.449 sys 0m0.142s 00:06:42.449 20:57:09 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.449 20:57:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:42.449 ************************************ 00:06:42.449 END TEST accel_comp 00:06:42.449 ************************************ 00:06:42.449 20:57:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.449 20:57:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:42.449 20:57:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:42.449 20:57:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
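Each test in this log is launched through run_test; the trace prefixes point at common/autotest_common.sh, and the START/END TEST banners plus the real/user/sys lines after every test are consistent with a helper that banners the name, times the wrapped command, and banners again. A hedged approximation of that wrapper, not the actual helper:

run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                  # timing output matches the real/user/sys lines in this log
    local rc=$?
    echo "************ END TEST $name ************"
    return "$rc"
}
# usage as recorded above:
#   run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y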
00:06:42.449 20:57:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.449 ************************************ 00:06:42.449 START TEST accel_decomp 00:06:42.449 ************************************ 00:06:42.449 20:57:09 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:42.449 20:57:09 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:42.449 [2024-07-15 20:57:09.555601] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
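For reference, the command under test here, restated with the flag meanings that can be inferred from the surrounding trace (the -y reading is an assumption; the others are echoed back later as accel_opc=decompress, '1 seconds', and the bib input path):

# -c /dev/fd/62 : accel JSON config streamed in by build_accel_config (see jq -r . above)
# -t 1          : run duration; the trace later records the value as '1 seconds'
# -w decompress : workload, recorded as accel_opc=decompress
# -l spdk/test/accel/bib : input file used by the compress/decompress workloads
# -y            : presumably enables verification of the output (assumption)
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w decompress \
    -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y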
00:06:42.449 [2024-07-15 20:57:09.555686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777761 ] 00:06:42.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.449 [2024-07-15 20:57:09.627548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.449 [2024-07-15 20:57:09.699916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 20:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.647 20:57:10 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.647 20:57:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.647 00:06:43.647 real 0m1.345s 00:06:43.647 user 0m1.233s 00:06:43.647 sys 0m0.127s 00:06:43.647 20:57:10 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.647 20:57:10 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:43.647 ************************************ 00:06:43.647 END TEST accel_decomp 00:06:43.647 ************************************ 00:06:43.647 20:57:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.647 20:57:10 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.647 20:57:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:43.647 20:57:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.647 20:57:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.907 ************************************ 00:06:43.907 START TEST accel_decomp_full 00:06:43.907 ************************************ 00:06:43.907 20:57:10 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.907 20:57:10 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:43.907 20:57:10 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:43.907 [2024-07-15 20:57:10.982727] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:43.907 [2024-07-15 20:57:10.982816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777990 ] 00:06:43.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.907 [2024-07-15 20:57:11.052115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.907 [2024-07-15 20:57:11.122737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- 
# case "$var" in 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.907 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.908 20:57:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.908 20:57:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.908 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.908 20:57:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.286 20:57:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.286 00:06:45.286 real 0m1.344s 00:06:45.286 user 0m1.228s 00:06:45.286 sys 0m0.129s 00:06:45.286 20:57:12 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.286 20:57:12 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:45.286 ************************************ 00:06:45.286 END TEST accel_decomp_full 00:06:45.286 ************************************ 00:06:45.286 20:57:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.287 20:57:12 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:45.287 20:57:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 
']' 00:06:45.287 20:57:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.287 20:57:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.287 ************************************ 00:06:45.287 START TEST accel_decomp_mcore 00:06:45.287 ************************************ 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:45.287 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:45.287 [2024-07-15 20:57:12.411727] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
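[Editor's note] The accel_decomp_mcore run launched above drives the software decompress path across four reactors at once via the 0xf core mask. A minimal sketch of an equivalent standalone invocation, assuming a built SPDK tree and using only the flags visible in the trace (relative paths are illustrative):

    # decompress the pre-compressed test input for 1 second on cores 0-3
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf

The harness wraps the same command in run_test, which is why the per-core reactor start-up notices and the timing summary land in this log.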
00:06:45.287 [2024-07-15 20:57:12.411809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778234 ] 00:06:45.287 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.287 [2024-07-15 20:57:12.483605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.287 [2024-07-15 20:57:12.557055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.287 [2024-07-15 20:57:12.557151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.287 [2024-07-15 20:57:12.557232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.287 [2024-07-15 20:57:12.557234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.547 20:57:12 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:06:45.547 20:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.482 00:06:46.482 real 0m1.353s 00:06:46.482 user 0m4.557s 00:06:46.482 sys 0m0.136s 00:06:46.482 20:57:13 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.482 20:57:13 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:46.482 ************************************ 00:06:46.482 END TEST accel_decomp_mcore 00:06:46.482 ************************************ 00:06:46.742 20:57:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.742 20:57:13 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.742 20:57:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:46.742 20:57:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.742 20:57:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.742 ************************************ 00:06:46.742 START TEST accel_decomp_full_mcore 00:06:46.742 ************************************ 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:46.742 20:57:13 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:46.742 [2024-07-15 20:57:13.848119] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
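[Editor's note] accel_decomp_full_mcore repeats the multi-core run but adds -o 0; the '111250 bytes' values in the trace suggest the tool then sizes each transfer from the full input file instead of the default 4096-byte chunks. A hedged sketch of the same invocation (paths illustrative):

    # "full" variant: four cores plus full-size transfer buffers
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf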
00:06:46.742 [2024-07-15 20:57:13.848201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778519 ] 00:06:46.742 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.742 [2024-07-15 20:57:13.919522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.742 [2024-07-15 20:57:13.992760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.742 [2024-07-15 20:57:13.992857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.742 [2024-07-15 20:57:13.992917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.742 [2024-07-15 20:57:13.992919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.002 20:57:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.941 00:06:47.941 real 0m1.366s 00:06:47.941 user 0m4.590s 00:06:47.941 sys 0m0.143s 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.941 20:57:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:47.941 ************************************ 00:06:47.941 END TEST accel_decomp_full_mcore 00:06:47.941 ************************************ 00:06:48.201 20:57:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.201 20:57:15 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:48.201 20:57:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:48.201 20:57:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.201 20:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.201 ************************************ 00:06:48.201 START TEST accel_decomp_mthread 00:06:48.201 ************************************ 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:48.201 [2024-07-15 20:57:15.295610] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
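[Editor's note] The next case, accel_decomp_mthread, drops the core mask (the EAL line shows -c 0x1) and instead passes -T 2, which appears to ask accel_perf for two worker threads on the single core. A minimal standalone sketch under that assumption:

    # single reactor, two submission threads
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2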
00:06:48.201 [2024-07-15 20:57:15.295692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778804 ] 00:06:48.201 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.201 [2024-07-15 20:57:15.365477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.201 [2024-07-15 20:57:15.436074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.201 20:57:15 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.201 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.461 20:57:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 20:57:16 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.400 00:06:49.400 real 0m1.341s 00:06:49.400 user 0m1.236s 00:06:49.400 sys 0m0.120s 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.400 20:57:16 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:49.400 ************************************ 00:06:49.400 END TEST accel_decomp_mthread 00:06:49.400 ************************************ 00:06:49.400 20:57:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.400 20:57:16 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.400 20:57:16 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:49.400 20:57:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:06:49.400 20:57:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.661 ************************************ 00:06:49.661 START TEST accel_decomp_full_mthread 00:06:49.661 ************************************ 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:49.661 [2024-07-15 20:57:16.720693] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
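[Editor's note] accel_decomp_full_mthread is the last permutation in this matrix, combining the full-size buffers of -o 0 with the two threads of -T 2 on one core. Sketch, same assumptions as above:

    # full buffers plus threaded submission on a single core
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2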
00:06:49.661 [2024-07-15 20:57:16.720776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779094 ] 00:06:49.661 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.661 [2024-07-15 20:57:16.791611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.661 [2024-07-15 20:57:16.863216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.661 20:57:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.041 00:06:51.041 real 0m1.363s 00:06:51.041 user 0m1.233s 00:06:51.041 sys 0m0.143s 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.041 20:57:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:51.041 ************************************ 00:06:51.041 END TEST accel_decomp_full_mthread 
00:06:51.041 ************************************ 00:06:51.041 20:57:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.041 20:57:18 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:51.041 20:57:18 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.041 20:57:18 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:51.041 20:57:18 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:51.041 20:57:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.041 20:57:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.041 20:57:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.041 20:57:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.041 20:57:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.041 20:57:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.041 20:57:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.041 20:57:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:51.041 20:57:18 accel -- accel/accel.sh@41 -- # jq -r . 00:06:51.041 ************************************ 00:06:51.041 START TEST accel_dif_functional_tests 00:06:51.041 ************************************ 00:06:51.041 20:57:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.041 [2024-07-15 20:57:18.157348] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:51.041 [2024-07-15 20:57:18.157392] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779374 ] 00:06:51.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.041 [2024-07-15 20:57:18.221547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.041 [2024-07-15 20:57:18.293123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.042 [2024-07-15 20:57:18.293221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.042 [2024-07-15 20:57:18.293221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.367 00:06:51.367 00:06:51.367 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.367 http://cunit.sourceforge.net/ 00:06:51.367 00:06:51.367 00:06:51.367 Suite: accel_dif 00:06:51.367 Test: verify: DIF generated, GUARD check ...passed 00:06:51.367 Test: verify: DIF generated, APPTAG check ...passed 00:06:51.367 Test: verify: DIF generated, REFTAG check ...passed 00:06:51.367 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:57:18.361884] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.367 passed 00:06:51.367 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:57:18.361937] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.367 passed 00:06:51.367 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:57:18.361963] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.367 passed 00:06:51.368 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:51.368 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 20:57:18.362014] dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:51.368 passed 00:06:51.368 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:51.368 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:51.368 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:51.368 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:57:18.362110] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:51.368 passed 00:06:51.368 Test: verify copy: DIF generated, GUARD check ...passed 00:06:51.368 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:51.368 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:51.368 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:57:18.362222] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.368 passed 00:06:51.368 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:57:18.362250] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.368 passed 00:06:51.368 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:57:18.362274] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.368 passed 00:06:51.368 Test: generate copy: DIF generated, GUARD check ...passed 00:06:51.368 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:51.368 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:51.368 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:51.368 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:51.368 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:51.368 Test: generate copy: iovecs-len validate ...[2024-07-15 20:57:18.362449] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
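[Editor's note] The *ERROR* lines from dif.c above are the expected negative-path output: each "DIF not generated" and "verify copy" case deliberately mismatches a Guard, App Tag or Ref Tag so the verify routine must report the failure, and the iovecs-len case feeds bounce buffers misaligned with the block size. The suite itself is a CUnit binary that takes the usual SPDK -c JSON config; the harness synthesizes one and hands it over /dev/fd/62, but an ordinary file path works the same way (config contents assumed):

    # rerun just the DIF functional suite against an accel JSON config
    ./test/accel/dif/dif -c /path/to/accel_config.json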
00:06:51.368 passed 00:06:51.368 Test: generate copy: buffer alignment validate ...passed 00:06:51.368 00:06:51.368 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.368 suites 1 1 n/a 0 0 00:06:51.368 tests 26 26 26 0 0 00:06:51.368 asserts 115 115 115 0 n/a 00:06:51.368 00:06:51.368 Elapsed time = 0.002 seconds 00:06:51.368 00:06:51.368 real 0m0.376s 00:06:51.368 user 0m0.554s 00:06:51.368 sys 0m0.139s 00:06:51.368 20:57:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.368 20:57:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:51.368 ************************************ 00:06:51.368 END TEST accel_dif_functional_tests 00:06:51.368 ************************************ 00:06:51.368 20:57:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.368 00:06:51.368 real 0m31.494s 00:06:51.368 user 0m34.614s 00:06:51.368 sys 0m4.998s 00:06:51.368 20:57:18 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.368 20:57:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.368 ************************************ 00:06:51.368 END TEST accel 00:06:51.368 ************************************ 00:06:51.368 20:57:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.368 20:57:18 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:51.368 20:57:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.368 20:57:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.368 20:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:51.627 ************************************ 00:06:51.627 START TEST accel_rpc 00:06:51.627 ************************************ 00:06:51.627 20:57:18 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:51.627 * Looking for test storage... 00:06:51.627 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:51.627 20:57:18 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.627 20:57:18 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=779505 00:06:51.627 20:57:18 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 779505 00:06:51.627 20:57:18 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:51.627 20:57:18 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 779505 ']' 00:06:51.627 20:57:18 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.627 20:57:18 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.627 20:57:18 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.627 20:57:18 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.627 20:57:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.627 [2024-07-15 20:57:18.757688] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:51.627 [2024-07-15 20:57:18.757750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779505 ] 00:06:51.627 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.627 [2024-07-15 20:57:18.827306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.627 [2024-07-15 20:57:18.904155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.562 20:57:19 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.562 20:57:19 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.562 20:57:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:52.562 20:57:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:52.562 20:57:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:52.562 20:57:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:52.562 20:57:19 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:52.562 20:57:19 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.562 20:57:19 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.562 20:57:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 ************************************ 00:06:52.562 START TEST accel_assign_opcode 00:06:52.562 ************************************ 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 [2024-07-15 20:57:19.614269] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 [2024-07-15 20:57:19.626279] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.563 20:57:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:52.563 20:57:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:52.563 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:52.563 20:57:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:52.563 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.563 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.563 software 00:06:52.563 00:06:52.563 real 0m0.240s 00:06:52.563 user 0m0.045s 00:06:52.563 sys 0m0.015s 00:06:52.563 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.563 20:57:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.563 ************************************ 00:06:52.563 END TEST accel_assign_opcode 00:06:52.563 ************************************ 00:06:52.821 20:57:19 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:52.821 20:57:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 779505 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 779505 ']' 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 779505 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779505 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779505' 00:06:52.822 killing process with pid 779505 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@967 -- # kill 779505 00:06:52.822 20:57:19 accel_rpc -- common/autotest_common.sh@972 -- # wait 779505 00:06:53.081 00:06:53.081 real 0m1.599s 00:06:53.081 user 0m1.642s 00:06:53.081 sys 0m0.466s 00:06:53.081 20:57:20 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.081 20:57:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.081 ************************************ 00:06:53.081 END TEST accel_rpc 00:06:53.081 ************************************ 00:06:53.081 20:57:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.081 20:57:20 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.081 20:57:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.081 20:57:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.081 20:57:20 -- common/autotest_common.sh@10 -- # set +x 00:06:53.081 ************************************ 00:06:53.081 START TEST app_cmdline 00:06:53.081 ************************************ 00:06:53.081 20:57:20 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.340 * Looking for test storage... 
00:06:53.340 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:53.340 20:57:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.340 20:57:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=779920 00:06:53.340 20:57:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 779920 00:06:53.340 20:57:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.340 20:57:20 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 779920 ']' 00:06:53.340 20:57:20 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.340 20:57:20 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.340 20:57:20 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.340 20:57:20 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.340 20:57:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.340 [2024-07-15 20:57:20.450034] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:53.340 [2024-07-15 20:57:20.450099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779920 ] 00:06:53.340 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.340 [2024-07-15 20:57:20.519772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.340 [2024-07-15 20:57:20.595507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:54.275 { 00:06:54.275 "version": "SPDK v24.09-pre git sha1 cdc37ee83", 00:06:54.275 "fields": { 00:06:54.275 "major": 24, 00:06:54.275 "minor": 9, 00:06:54.275 "patch": 0, 00:06:54.275 "suffix": "-pre", 00:06:54.275 "commit": "cdc37ee83" 00:06:54.275 } 00:06:54.275 } 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.275 20:57:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:54.275 20:57:21 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.533 request: 00:06:54.533 { 00:06:54.533 "method": "env_dpdk_get_mem_stats", 00:06:54.533 "req_id": 1 00:06:54.533 } 00:06:54.533 Got JSON-RPC error response 00:06:54.533 response: 00:06:54.533 { 00:06:54.533 "code": -32601, 00:06:54.533 "message": "Method not found" 00:06:54.533 } 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.533 20:57:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 779920 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 779920 ']' 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 779920 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779920 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779920' 00:06:54.533 killing process with pid 779920 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@967 -- # kill 779920 00:06:54.533 20:57:21 app_cmdline -- common/autotest_common.sh@972 -- # wait 779920 00:06:54.792 00:06:54.792 real 0m1.685s 00:06:54.792 user 0m1.966s 00:06:54.792 sys 0m0.467s 00:06:54.792 20:57:22 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:54.792 20:57:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.792 ************************************ 00:06:54.792 END TEST app_cmdline 00:06:54.792 ************************************ 00:06:54.792 20:57:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.792 20:57:22 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:54.792 20:57:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.792 20:57:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.792 20:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:55.051 ************************************ 00:06:55.051 START TEST version 00:06:55.051 ************************************ 00:06:55.051 20:57:22 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:55.051 * Looking for test storage... 00:06:55.051 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:55.051 20:57:22 version -- app/version.sh@17 -- # get_header_version major 00:06:55.051 20:57:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:55.051 20:57:22 version -- app/version.sh@14 -- # cut -f2 00:06:55.051 20:57:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.051 20:57:22 version -- app/version.sh@17 -- # major=24 00:06:55.051 20:57:22 version -- app/version.sh@18 -- # get_header_version minor 00:06:55.051 20:57:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:55.051 20:57:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.051 20:57:22 version -- app/version.sh@14 -- # cut -f2 00:06:55.051 20:57:22 version -- app/version.sh@18 -- # minor=9 00:06:55.051 20:57:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:55.051 20:57:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:55.051 20:57:22 version -- app/version.sh@14 -- # cut -f2 00:06:55.051 20:57:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.051 20:57:22 version -- app/version.sh@19 -- # patch=0 00:06:55.051 20:57:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:55.051 20:57:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:55.051 20:57:22 version -- app/version.sh@14 -- # cut -f2 00:06:55.051 20:57:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.051 20:57:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:55.051 20:57:22 version -- app/version.sh@22 -- # version=24.9 00:06:55.051 20:57:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.051 20:57:22 version -- app/version.sh@28 -- # version=24.9rc0 00:06:55.051 20:57:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:55.051 20:57:22 version -- app/version.sh@30 -- # python3 -c 
'import spdk; print(spdk.__version__)' 00:06:55.051 20:57:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:55.051 20:57:22 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:55.051 00:06:55.051 real 0m0.185s 00:06:55.051 user 0m0.103s 00:06:55.051 sys 0m0.125s 00:06:55.051 20:57:22 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.051 20:57:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:55.051 ************************************ 00:06:55.051 END TEST version 00:06:55.051 ************************************ 00:06:55.051 20:57:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:55.051 20:57:22 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:55.051 20:57:22 -- spdk/autotest.sh@198 -- # uname -s 00:06:55.051 20:57:22 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:55.051 20:57:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:55.051 20:57:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:55.051 20:57:22 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:55.051 20:57:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:55.051 20:57:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:55.051 20:57:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.051 20:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:55.311 20:57:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:06:55.311 20:57:22 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:55.311 20:57:22 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:06:55.311 20:57:22 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:06:55.311 20:57:22 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:55.311 20:57:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.311 20:57:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.311 20:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:55.311 ************************************ 00:06:55.311 START TEST llvm_fuzz 00:06:55.311 ************************************ 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:55.311 * Looking for test storage... 
00:06:55.311 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:55.311 20:57:22 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.311 20:57:22 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:55.311 ************************************ 00:06:55.311 START TEST nvmf_llvm_fuzz 00:06:55.311 ************************************ 00:06:55.311 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:55.574 * Looking for test storage... 
00:06:55.574 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.574 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:55.574 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:55.574 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:55.574 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:55.574 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:55.574 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:55.574 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:55.575 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:55.575 #define SPDK_CONFIG_H 00:06:55.575 #define SPDK_CONFIG_APPS 1 00:06:55.575 #define SPDK_CONFIG_ARCH native 00:06:55.575 #undef SPDK_CONFIG_ASAN 00:06:55.575 #undef SPDK_CONFIG_AVAHI 00:06:55.575 #undef SPDK_CONFIG_CET 00:06:55.575 #define SPDK_CONFIG_COVERAGE 1 00:06:55.575 #define SPDK_CONFIG_CROSS_PREFIX 00:06:55.575 #undef SPDK_CONFIG_CRYPTO 00:06:55.575 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:55.575 #undef SPDK_CONFIG_CUSTOMOCF 00:06:55.575 #undef SPDK_CONFIG_DAOS 00:06:55.575 #define SPDK_CONFIG_DAOS_DIR 00:06:55.575 #define SPDK_CONFIG_DEBUG 1 00:06:55.575 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:55.575 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:55.575 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:55.575 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:55.575 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:55.575 #undef SPDK_CONFIG_DPDK_UADK 00:06:55.575 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:55.575 #define SPDK_CONFIG_EXAMPLES 1 00:06:55.575 #undef SPDK_CONFIG_FC 00:06:55.575 #define SPDK_CONFIG_FC_PATH 00:06:55.575 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:55.575 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:55.575 #undef SPDK_CONFIG_FUSE 00:06:55.575 #define SPDK_CONFIG_FUZZER 1 00:06:55.575 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:55.575 #undef SPDK_CONFIG_GOLANG 00:06:55.575 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:55.575 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:55.575 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:55.575 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:55.575 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:55.575 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:55.575 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:55.575 #define SPDK_CONFIG_IDXD 1 00:06:55.575 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:55.576 #undef SPDK_CONFIG_IPSEC_MB 00:06:55.576 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:55.576 #define SPDK_CONFIG_ISAL 1 00:06:55.576 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:06:55.576 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:55.576 #define SPDK_CONFIG_LIBDIR 00:06:55.576 #undef SPDK_CONFIG_LTO 00:06:55.576 #define SPDK_CONFIG_MAX_LCORES 128 00:06:55.576 #define SPDK_CONFIG_NVME_CUSE 1 00:06:55.576 #undef SPDK_CONFIG_OCF 00:06:55.576 #define SPDK_CONFIG_OCF_PATH 00:06:55.576 #define SPDK_CONFIG_OPENSSL_PATH 00:06:55.576 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:55.576 #define SPDK_CONFIG_PGO_DIR 00:06:55.576 #undef SPDK_CONFIG_PGO_USE 00:06:55.576 #define SPDK_CONFIG_PREFIX /usr/local 00:06:55.576 #undef SPDK_CONFIG_RAID5F 00:06:55.576 #undef SPDK_CONFIG_RBD 00:06:55.576 #define SPDK_CONFIG_RDMA 1 00:06:55.576 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:55.576 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:55.576 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:55.576 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:55.576 #undef SPDK_CONFIG_SHARED 00:06:55.576 #undef SPDK_CONFIG_SMA 00:06:55.576 #define SPDK_CONFIG_TESTS 1 00:06:55.576 #undef SPDK_CONFIG_TSAN 00:06:55.576 #define SPDK_CONFIG_UBLK 1 00:06:55.576 #define SPDK_CONFIG_UBSAN 1 00:06:55.576 #undef SPDK_CONFIG_UNIT_TESTS 00:06:55.576 #undef SPDK_CONFIG_URING 00:06:55.576 #define SPDK_CONFIG_URING_PATH 00:06:55.576 #undef SPDK_CONFIG_URING_ZNS 00:06:55.576 #undef SPDK_CONFIG_USDT 00:06:55.576 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:55.576 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:55.576 #define SPDK_CONFIG_VFIO_USER 1 00:06:55.576 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:55.576 #define SPDK_CONFIG_VHOST 1 00:06:55.576 #define SPDK_CONFIG_VIRTIO 1 00:06:55.576 #undef SPDK_CONFIG_VTUNE 00:06:55.576 #define SPDK_CONFIG_VTUNE_DIR 00:06:55.576 #define SPDK_CONFIG_WERROR 1 00:06:55.576 #define SPDK_CONFIG_WPDK_DIR 00:06:55.576 #undef SPDK_CONFIG_XNVME 00:06:55.576 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:55.576 
20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:55.576 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:55.577 20:57:22 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:55.577 20:57:22 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.577 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:55.578 20:57:22 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 780469 ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 780469 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.oHisCF 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.oHisCF/tests/nvmf /tmp/spdk.oHisCF 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=954408960 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4330020864 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=53939654656 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742317568 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7802662912 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866448384 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342484992 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348465152 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5980160 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870204416 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=954368 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.578 20:57:22 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:55.578 * Looking for test storage... 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=53939654656 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10017255424 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.578 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:55.578 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:55.579 20:57:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:55.839 [2024-07-15 20:57:22.881766] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:55.839 [2024-07-15 20:57:22.881836] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780516 ] 00:06:55.839 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.098 [2024-07-15 20:57:23.136586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.098 [2024-07-15 20:57:23.233311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.098 [2024-07-15 20:57:23.293010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.098 [2024-07-15 20:57:23.309315] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:56.098 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.098 INFO: Seed: 1695358244 00:06:56.098 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:06:56.098 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:06:56.098 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:56.098 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.098 #2 INITED exec/s: 0 rss: 64Mb 00:06:56.098 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:56.098 This may also happen if the target rejected all inputs we tried so far 00:06:56.098 [2024-07-15 20:57:23.386309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:abababab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:56.098 [2024-07-15 20:57:23.386347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.617 NEW_FUNC[1/697]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:56.617 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.617 #14 NEW cov: 11864 ft: 11866 corp: 2/118b lim: 320 exec/s: 0 rss: 70Mb L: 117/117 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:56.617 [2024-07-15 20:57:23.726652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:ab750000 cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:56.617 [2024-07-15 20:57:23.726701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.617 #20 NEW cov: 11995 ft: 12579 corp: 3/235b lim: 320 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 ChangeBinInt- 00:06:56.617 [2024-07-15 20:57:23.786956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:56.617 [2024-07-15 20:57:23.786986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.617 [2024-07-15 20:57:23.787114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ababab00 00:06:56.617 [2024-07-15 20:57:23.787132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.617 [2024-07-15 20:57:23.787266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:6 nsid:abababab cdw10:abababab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:56.617 [2024-07-15 20:57:23.787284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.617 #21 NEW cov: 12021 ft: 13141 corp: 4/427b lim: 320 exec/s: 0 rss: 70Mb L: 192/192 MS: 1 InsertRepeatedBytes- 00:06:56.617 [2024-07-15 20:57:23.846833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.617 [2024-07-15 20:57:23.846863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.617 #22 NEW cov: 12123 ft: 13470 corp: 5/523b lim: 320 exec/s: 0 rss: 70Mb L: 96/192 MS: 1 InsertRepeatedBytes- 00:06:56.617 [2024-07-15 20:57:23.896946] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.617 [2024-07-15 20:57:23.896973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.876 #23 NEW 
cov: 12123 ft: 13630 corp: 6/622b lim: 320 exec/s: 0 rss: 71Mb L: 99/192 MS: 1 CrossOver- 00:06:56.876 [2024-07-15 20:57:23.957255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.876 [2024-07-15 20:57:23.957281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.876 #24 NEW cov: 12123 ft: 13692 corp: 7/721b lim: 320 exec/s: 0 rss: 71Mb L: 99/192 MS: 1 ChangeByte- 00:06:56.876 [2024-07-15 20:57:24.017636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:56.876 [2024-07-15 20:57:24.017665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.876 [2024-07-15 20:57:24.017823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.876 [2024-07-15 20:57:24.017841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.876 [2024-07-15 20:57:24.017951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:ffff cdw10:abababab cdw11:75000000 00:06:56.877 [2024-07-15 20:57:24.017968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.877 #25 NEW cov: 12123 ft: 13821 corp: 8/916b lim: 320 exec/s: 0 rss: 71Mb L: 195/195 MS: 1 CrossOver- 00:06:56.877 [2024-07-15 20:57:24.077582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:56.877 [2024-07-15 20:57:24.077609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.877 #26 NEW cov: 12123 ft: 13852 corp: 9/1012b lim: 320 exec/s: 0 rss: 71Mb L: 96/195 MS: 1 ChangeByte- 00:06:56.877 [2024-07-15 20:57:24.127649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:abab30ab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:56.877 [2024-07-15 20:57:24.127679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.877 #27 NEW cov: 12123 ft: 13895 corp: 10/1129b lim: 320 exec/s: 0 rss: 71Mb L: 117/195 MS: 1 ChangeByte- 00:06:57.136 [2024-07-15 20:57:24.177934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.136 [2024-07-15 20:57:24.177961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.136 #28 NEW cov: 12123 ft: 14001 corp: 11/1226b lim: 320 exec/s: 0 rss: 71Mb L: 97/195 MS: 1 InsertByte- 00:06:57.136 [2024-07-15 20:57:24.238137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:abab30ab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:57.136 [2024-07-15 20:57:24.238164] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.136 [2024-07-15 20:57:24.238300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:5 nsid:abababab cdw10:abababab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:57.136 [2024-07-15 20:57:24.238320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.136 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:57.136 #29 NEW cov: 12146 ft: 14208 corp: 12/1397b lim: 320 exec/s: 0 rss: 71Mb L: 171/195 MS: 1 CopyPart- 00:06:57.136 [2024-07-15 20:57:24.298754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:29292929 cdw11:29292929 SGL TRANSPORT DATA BLOCK TRANSPORT 0x29292929292929ab 00:06:57.136 [2024-07-15 20:57:24.298781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.136 [2024-07-15 20:57:24.298917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (29) qid:0 cid:5 nsid:29292929 cdw10:29292929 cdw11:29292929 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.136 [2024-07-15 20:57:24.298933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.136 [2024-07-15 20:57:24.299070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (29) qid:0 cid:6 nsid:abababab cdw10:abababab cdw11:abababab SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.136 [2024-07-15 20:57:24.299087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.136 NEW_FUNC[1/1]: 0x17bf950 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:57.136 #30 NEW cov: 12160 ft: 14762 corp: 13/1619b lim: 320 exec/s: 0 rss: 71Mb L: 222/222 MS: 1 InsertRepeatedBytes- 00:06:57.136 [2024-07-15 20:57:24.348514] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:29292929 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2929292929292929 00:06:57.136 [2024-07-15 20:57:24.348541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.136 #31 NEW cov: 12160 ft: 14835 corp: 14/1715b lim: 320 exec/s: 31 rss: 71Mb L: 96/222 MS: 1 CrossOver- 00:06:57.136 [2024-07-15 20:57:24.398535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.136 [2024-07-15 20:57:24.398562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.396 #32 NEW cov: 12160 ft: 14852 corp: 15/1814b lim: 320 exec/s: 32 rss: 72Mb L: 99/222 MS: 1 ShuffleBytes- 00:06:57.396 [2024-07-15 20:57:24.458824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.396 [2024-07-15 20:57:24.458851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.396 #35 NEW cov: 12160 ft: 14926 corp: 16/1937b lim: 320 exec/s: 35 
rss: 72Mb L: 123/222 MS: 3 EraseBytes-CMP-CrossOver- DE: "\001\000\000J"- 00:06:57.396 [2024-07-15 20:57:24.508825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.396 [2024-07-15 20:57:24.508850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.396 #36 NEW cov: 12160 ft: 14955 corp: 17/2054b lim: 320 exec/s: 36 rss: 72Mb L: 117/222 MS: 1 CrossOver- 00:06:57.396 [2024-07-15 20:57:24.559564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:29292929 cdw11:29292929 SGL TRANSPORT DATA BLOCK TRANSPORT 0x29292929292929ab 00:06:57.396 [2024-07-15 20:57:24.559590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.396 [2024-07-15 20:57:24.559726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (29) qid:0 cid:5 nsid:29292929 cdw10:29292929 cdw11:29292929 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.396 [2024-07-15 20:57:24.559743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.396 [2024-07-15 20:57:24.559895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (29) qid:0 cid:6 nsid:abababab cdw10:abababab cdw11:abababab SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.396 [2024-07-15 20:57:24.559912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.396 #37 NEW cov: 12160 ft: 14967 corp: 18/2276b lim: 320 exec/s: 37 rss: 72Mb L: 222/222 MS: 1 ShuffleBytes- 00:06:57.396 [2024-07-15 20:57:24.619199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:10101010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1010101010101010 00:06:57.396 [2024-07-15 20:57:24.619226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.396 #39 NEW cov: 12160 ft: 14979 corp: 19/2358b lim: 320 exec/s: 39 rss: 72Mb L: 82/222 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:57.396 [2024-07-15 20:57:24.669731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:57.396 [2024-07-15 20:57:24.669757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.396 [2024-07-15 20:57:24.669894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.396 [2024-07-15 20:57:24.669911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.396 [2024-07-15 20:57:24.670027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:ffff cdw10:abababab cdw11:75000000 00:06:57.396 [2024-07-15 20:57:24.670044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.656 #40 NEW cov: 12160 ft: 14988 corp: 
20/2553b lim: 320 exec/s: 40 rss: 72Mb L: 195/222 MS: 1 ShuffleBytes- 00:06:57.656 [2024-07-15 20:57:24.729931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:57.656 [2024-07-15 20:57:24.729958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.656 [2024-07-15 20:57:24.730073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ababab00 00:06:57.656 [2024-07-15 20:57:24.730090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.656 [2024-07-15 20:57:24.730228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:6 nsid:abababab cdw10:abababab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:57.656 [2024-07-15 20:57:24.730244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.656 #41 NEW cov: 12160 ft: 15026 corp: 21/2745b lim: 320 exec/s: 41 rss: 72Mb L: 192/222 MS: 1 ChangeBit- 00:06:57.656 [2024-07-15 20:57:24.779737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffff0affffffffff 00:06:57.656 [2024-07-15 20:57:24.779765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.656 #42 NEW cov: 12160 ft: 15042 corp: 22/2810b lim: 320 exec/s: 42 rss: 72Mb L: 65/222 MS: 1 EraseBytes- 00:06:57.656 [2024-07-15 20:57:24.829901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:29292929 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2929292929292929 00:06:57.656 [2024-07-15 20:57:24.829932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.656 #43 NEW cov: 12160 ft: 15055 corp: 23/2906b lim: 320 exec/s: 43 rss: 72Mb L: 96/222 MS: 1 ChangeBit- 00:06:57.656 [2024-07-15 20:57:24.890094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.656 [2024-07-15 20:57:24.890122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.656 #44 NEW cov: 12160 ft: 15070 corp: 24/3005b lim: 320 exec/s: 44 rss: 72Mb L: 99/222 MS: 1 ShuffleBytes- 00:06:57.915 [2024-07-15 20:57:24.950327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.915 [2024-07-15 20:57:24.950358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.915 #45 NEW cov: 12160 ft: 15093 corp: 25/3104b lim: 320 exec/s: 45 rss: 72Mb L: 99/222 MS: 1 ShuffleBytes- 00:06:57.915 [2024-07-15 20:57:25.000458] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:29292929 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2929292929292929 00:06:57.915 [2024-07-15 20:57:25.000484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.915 #51 NEW cov: 12160 ft: 15101 corp: 26/3177b lim: 320 exec/s: 51 rss: 72Mb L: 73/222 MS: 1 EraseBytes- 00:06:57.915 [2024-07-15 20:57:25.050942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:57.915 [2024-07-15 20:57:25.050972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.915 [2024-07-15 20:57:25.051089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ababab00 00:06:57.915 [2024-07-15 20:57:25.051110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.915 [2024-07-15 20:57:25.051264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:6 nsid:abababab cdw10:abababab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:57.915 [2024-07-15 20:57:25.051282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.915 [2024-07-15 20:57:25.101336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:57.915 [2024-07-15 20:57:25.101364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.915 [2024-07-15 20:57:25.101483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ababab00 00:06:57.915 [2024-07-15 20:57:25.101500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.915 [2024-07-15 20:57:25.101623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.915 [2024-07-15 20:57:25.101640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.915 [2024-07-15 20:57:25.101756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:abababab cdw11:abababab 00:06:57.915 [2024-07-15 20:57:25.101773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.915 #53 NEW cov: 12160 ft: 15257 corp: 27/3471b lim: 320 exec/s: 53 rss: 72Mb L: 294/294 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:57.915 [2024-07-15 20:57:25.150987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:57.915 [2024-07-15 20:57:25.151015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.915 #54 NEW cov: 12160 ft: 15273 corp: 28/3570b lim: 320 exec/s: 54 rss: 72Mb L: 99/294 MS: 1 ShuffleBytes- 00:06:58.173 [2024-07-15 20:57:25.211476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:58.173 [2024-07-15 20:57:25.211503] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.173 [2024-07-15 20:57:25.211637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ababab00 00:06:58.173 [2024-07-15 20:57:25.211657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.173 [2024-07-15 20:57:25.211803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:6 nsid:abababab cdw10:abababab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:58.173 [2024-07-15 20:57:25.211820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.173 #55 NEW cov: 12160 ft: 15312 corp: 29/3762b lim: 320 exec/s: 55 rss: 73Mb L: 192/294 MS: 1 PersAutoDict- DE: "\001\000\000J"- 00:06:58.173 [2024-07-15 20:57:25.271246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:29292929 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2929292929292929 00:06:58.173 [2024-07-15 20:57:25.271273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.173 #56 NEW cov: 12160 ft: 15357 corp: 30/3858b lim: 320 exec/s: 56 rss: 73Mb L: 96/294 MS: 1 ShuffleBytes- 00:06:58.173 [2024-07-15 20:57:25.331572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:4 nsid:abababab cdw10:abab30ab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:58.173 [2024-07-15 20:57:25.331599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.173 [2024-07-15 20:57:25.331752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ab) qid:0 cid:5 nsid:abababab cdw10:abababab cdw11:abababab SGL TRANSPORT DATA BLOCK TRANSPORT 0xabababababababab 00:06:58.173 [2024-07-15 20:57:25.331771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.173 #57 NEW cov: 12160 ft: 15366 corp: 31/4029b lim: 320 exec/s: 28 rss: 73Mb L: 171/294 MS: 1 ShuffleBytes- 00:06:58.173 #57 DONE cov: 12160 ft: 15366 corp: 31/4029b lim: 320 exec/s: 28 rss: 73Mb 00:06:58.173 ###### Recommended dictionary. ###### 00:06:58.173 "\001\000\000J" # Uses: 1 00:06:58.173 ###### End of recommended dictionary. 
###### 00:06:58.173 Done 57 runs in 2 second(s) 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.430 20:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:58.430 [2024-07-15 20:57:25.536183] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:58.430 [2024-07-15 20:57:25.536265] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781049 ] 00:06:58.430 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.689 [2024-07-15 20:57:25.790550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.689 [2024-07-15 20:57:25.882735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.689 [2024-07-15 20:57:25.941843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.689 [2024-07-15 20:57:25.958139] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:58.689 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.689 INFO: Seed: 48405941 00:06:58.947 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:06:58.947 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:06:58.947 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:58.947 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.947 #2 INITED exec/s: 0 rss: 63Mb 00:06:58.947 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.947 This may also happen if the target rejected all inputs we tried so far 00:06:58.947 [2024-07-15 20:57:26.003231] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:58.947 [2024-07-15 20:57:26.003354] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:58.947 [2024-07-15 20:57:26.003469] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:58.947 [2024-07-15 20:57:26.003676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.947 [2024-07-15 20:57:26.003705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.947 [2024-07-15 20:57:26.003761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.947 [2024-07-15 20:57:26.003776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.947 [2024-07-15 20:57:26.003828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.947 [2024-07-15 20:57:26.003842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.206 NEW_FUNC[1/697]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:59.206 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.206 #3 NEW cov: 11931 ft: 11932 corp: 2/20b lim: 30 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:59.206 [2024-07-15 20:57:26.334797] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid 
log page offset 0x200005e5e 00:06:59.206 [2024-07-15 20:57:26.334962] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.206 [2024-07-15 20:57:26.335118] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.206 [2024-07-15 20:57:26.335481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.335530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.206 [2024-07-15 20:57:26.335659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5d5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.335683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.206 [2024-07-15 20:57:26.335815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.335838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.206 #4 NEW cov: 12061 ft: 12783 corp: 3/39b lim: 30 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 ChangeByte- 00:06:59.206 [2024-07-15 20:57:26.384712] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:06:59.206 [2024-07-15 20:57:26.384872] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41124) > buf size (4096) 00:06:59.206 [2024-07-15 20:57:26.385191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.385222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.206 [2024-07-15 20:57:26.385337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.385355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.206 #10 NEW cov: 12090 ft: 13260 corp: 4/52b lim: 30 exec/s: 0 rss: 70Mb L: 13/19 MS: 1 InsertRepeatedBytes- 00:06:59.206 [2024-07-15 20:57:26.424823] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:06:59.206 [2024-07-15 20:57:26.424981] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41124) > buf size (4096) 00:06:59.206 [2024-07-15 20:57:26.425298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2800ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.425329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.206 [2024-07-15 20:57:26.425448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.425468] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.206 #11 NEW cov: 12175 ft: 13522 corp: 5/65b lim: 30 exec/s: 0 rss: 70Mb L: 13/19 MS: 1 ChangeByte- 00:06:59.206 [2024-07-15 20:57:26.475079] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5d 00:06:59.206 [2024-07-15 20:57:26.475247] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.206 [2024-07-15 20:57:26.475400] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.206 [2024-07-15 20:57:26.475736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.475766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.206 [2024-07-15 20:57:26.475885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.475903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.206 [2024-07-15 20:57:26.476023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.206 [2024-07-15 20:57:26.476041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.466 #17 NEW cov: 12175 ft: 13563 corp: 6/84b lim: 30 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 CopyPart- 00:06:59.466 [2024-07-15 20:57:26.525239] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5d 00:06:59.466 [2024-07-15 20:57:26.525403] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.525552] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000a5e 00:06:59.466 [2024-07-15 20:57:26.525895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.525925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.526041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.526061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.526178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.526197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.466 #18 NEW cov: 12175 ft: 13627 corp: 7/103b lim: 30 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 CrossOver- 00:06:59.466 [2024-07-15 20:57:26.575172] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:06:59.466 
[2024-07-15 20:57:26.575335] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41124) > buf size (4096) 00:06:59.466 [2024-07-15 20:57:26.575687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.575715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.575835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.575852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.466 #19 NEW cov: 12175 ft: 13720 corp: 8/116b lim: 30 exec/s: 0 rss: 71Mb L: 13/19 MS: 1 CopyPart- 00:06:59.466 [2024-07-15 20:57:26.615423] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.615593] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.615743] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.616061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.616089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.616204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e79025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.616223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.616354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.616371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.466 #20 NEW cov: 12175 ft: 13805 corp: 9/135b lim: 30 exec/s: 0 rss: 71Mb L: 19/19 MS: 1 ChangeByte- 00:06:59.466 [2024-07-15 20:57:26.655417] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5d 00:06:59.466 [2024-07-15 20:57:26.655591] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.655740] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.656070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.656100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.656224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 
20:57:26.656242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.656354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.656371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.466 #26 NEW cov: 12175 ft: 13819 corp: 10/155b lim: 30 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 InsertByte- 00:06:59.466 [2024-07-15 20:57:26.695637] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.695806] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.695965] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.466 [2024-07-15 20:57:26.696292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.696320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.696446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.696466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.696591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.696608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.466 #27 NEW cov: 12175 ft: 13889 corp: 11/174b lim: 30 exec/s: 0 rss: 71Mb L: 19/20 MS: 1 ChangeByte- 00:06:59.466 [2024-07-15 20:57:26.735637] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:06:59.466 [2024-07-15 20:57:26.735798] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41416) > buf size (4096) 00:06:59.466 [2024-07-15 20:57:26.736128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2800ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.736158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.466 [2024-07-15 20:57:26.736280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28710028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.466 [2024-07-15 20:57:26.736297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.726 #28 NEW cov: 12175 ft: 13928 corp: 12/187b lim: 30 exec/s: 0 rss: 71Mb L: 13/20 MS: 1 ChangeByte- 00:06:59.726 [2024-07-15 20:57:26.795923] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:06:59.726 [2024-07-15 20:57:26.796086] ctrlr.c:2647:nvmf_ctrlr_get_log_page: 
*ERROR*: Get log page: len (41124) > buf size (4096) 00:06:59.726 [2024-07-15 20:57:26.796423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.796457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.726 [2024-07-15 20:57:26.796568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.796584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.726 #29 NEW cov: 12175 ft: 14002 corp: 13/203b lim: 30 exec/s: 0 rss: 71Mb L: 16/20 MS: 1 CopyPart- 00:06:59.726 [2024-07-15 20:57:26.846165] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5d 00:06:59.726 [2024-07-15 20:57:26.846320] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000eded 00:06:59.726 [2024-07-15 20:57:26.846475] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000eded 00:06:59.726 [2024-07-15 20:57:26.846624] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.726 [2024-07-15 20:57:26.846946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.846973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.726 [2024-07-15 20:57:26.847095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e815e cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.847113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.726 [2024-07-15 20:57:26.847230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:eded81ed cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.847246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.726 [2024-07-15 20:57:26.847371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.847389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.726 #30 NEW cov: 12175 ft: 14506 corp: 14/231b lim: 30 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:59.726 [2024-07-15 20:57:26.886205] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.726 [2024-07-15 20:57:26.886363] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.726 [2024-07-15 20:57:26.886528] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.726 [2024-07-15 20:57:26.886891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.886920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.726 [2024-07-15 20:57:26.887044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.887061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.726 [2024-07-15 20:57:26.887171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.887189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.726 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:59.726 #31 NEW cov: 12198 ft: 14567 corp: 15/250b lim: 30 exec/s: 0 rss: 71Mb L: 19/28 MS: 1 CopyPart- 00:06:59.726 [2024-07-15 20:57:26.926209] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa5e 00:06:59.726 [2024-07-15 20:57:26.926365] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.726 [2024-07-15 20:57:26.926528] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.726 [2024-07-15 20:57:26.926876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.926904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.726 [2024-07-15 20:57:26.927022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.726 [2024-07-15 20:57:26.927040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.726 [2024-07-15 20:57:26.927152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.727 [2024-07-15 20:57:26.927169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.727 #37 NEW cov: 12198 ft: 14675 corp: 16/269b lim: 30 exec/s: 0 rss: 71Mb L: 19/28 MS: 1 CrossOver- 00:06:59.727 [2024-07-15 20:57:26.976419] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.727 [2024-07-15 20:57:26.976590] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.727 [2024-07-15 20:57:26.976734] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100005e5e 00:06:59.727 [2024-07-15 20:57:26.977060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.727 [2024-07-15 20:57:26.977089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.727 [2024-07-15 
20:57:26.977211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.727 [2024-07-15 20:57:26.977229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.727 [2024-07-15 20:57:26.977348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e815e cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.727 [2024-07-15 20:57:26.977365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.727 #38 NEW cov: 12198 ft: 14686 corp: 17/289b lim: 30 exec/s: 38 rss: 71Mb L: 20/28 MS: 1 InsertByte- 00:06:59.985 [2024-07-15 20:57:27.026233] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.985 [2024-07-15 20:57:27.026387] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.985 [2024-07-15 20:57:27.026542] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005d5e 00:06:59.985 [2024-07-15 20:57:27.026695] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.986 [2024-07-15 20:57:27.027023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.027051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.027177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.027196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.027322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.027338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.027466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.027484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.986 #39 NEW cov: 12198 ft: 14697 corp: 18/318b lim: 30 exec/s: 39 rss: 71Mb L: 29/29 MS: 1 CopyPart- 00:06:59.986 [2024-07-15 20:57:27.066794] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.986 [2024-07-15 20:57:27.066956] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.986 [2024-07-15 20:57:27.067096] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (620924) > buf size (4096) 00:06:59.986 [2024-07-15 20:57:27.067570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 
20:57:27.067601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.067727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.067746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.067862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.067880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.068007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.068024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.986 #40 NEW cov: 12215 ft: 14759 corp: 19/346b lim: 30 exec/s: 40 rss: 71Mb L: 28/29 MS: 1 InsertRepeatedBytes- 00:06:59.986 [2024-07-15 20:57:27.106873] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.986 [2024-07-15 20:57:27.107030] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (96636) > buf size (4096) 00:06:59.986 [2024-07-15 20:57:27.107184] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5e5e 00:06:59.986 [2024-07-15 20:57:27.107334] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.986 [2024-07-15 20:57:27.107661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.107689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.107809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.107828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.107942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.107957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.108079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.108099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.986 #41 NEW cov: 12215 ft: 14800 corp: 20/373b lim: 30 exec/s: 41 rss: 71Mb L: 27/29 MS: 1 InsertRepeatedBytes- 00:06:59.986 [2024-07-15 20:57:27.146638] 
ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:06:59.986 [2024-07-15 20:57:27.146804] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41124) > buf size (4096) 00:06:59.986 [2024-07-15 20:57:27.146956] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (69908) > buf size (4096) 00:06:59.986 [2024-07-15 20:57:27.147122] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (69908) > buf size (4096) 00:06:59.986 [2024-07-15 20:57:27.147462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2800ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.147489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.147611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.147633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.147749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:44440044 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.147766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.147890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:44440044 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.147908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.986 #42 NEW cov: 12215 ft: 14844 corp: 21/401b lim: 30 exec/s: 42 rss: 71Mb L: 28/29 MS: 1 InsertRepeatedBytes- 00:06:59.986 [2024-07-15 20:57:27.187037] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.986 [2024-07-15 20:57:27.187186] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.986 [2024-07-15 20:57:27.187322] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5e5e 00:06:59.986 [2024-07-15 20:57:27.187644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.187674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.187797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.187817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.187942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e005e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.187959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:59.986 #43 NEW cov: 12215 ft: 14857 corp: 22/421b lim: 30 exec/s: 43 rss: 71Mb L: 20/29 MS: 1 ChangeByte- 00:06:59.986 [2024-07-15 20:57:27.237080] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5d 00:06:59.986 [2024-07-15 20:57:27.237224] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:06:59.986 [2024-07-15 20:57:27.237375] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000a1a9 00:06:59.986 [2024-07-15 20:57:27.237703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.237731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.237843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.237860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.986 [2024-07-15 20:57:27.237976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e81a2 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.986 [2024-07-15 20:57:27.237994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.986 #44 NEW cov: 12215 ft: 14879 corp: 23/440b lim: 30 exec/s: 44 rss: 71Mb L: 19/29 MS: 1 ChangeBinInt- 00:07:00.245 [2024-07-15 20:57:27.277352] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:07:00.245 [2024-07-15 20:57:27.277520] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.245 [2024-07-15 20:57:27.277675] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.245 [2024-07-15 20:57:27.277829] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (96420) > buf size (4096) 00:07:00.245 [2024-07-15 20:57:27.278166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.278193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.245 [2024-07-15 20:57:27.278310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2828020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.278326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.245 [2024-07-15 20:57:27.278459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.278477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.245 [2024-07-15 20:57:27.278592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:5e280028 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.278609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.245 #45 NEW cov: 12215 ft: 14895 corp: 24/468b lim: 30 exec/s: 45 rss: 71Mb L: 28/29 MS: 1 CrossOver- 00:07:00.245 [2024-07-15 20:57:27.317376] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.245 [2024-07-15 20:57:27.317541] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005e5e 00:07:00.245 [2024-07-15 20:57:27.317696] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5e5e 00:07:00.245 [2024-07-15 20:57:27.318022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.318051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.245 [2024-07-15 20:57:27.318176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e835e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.318193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.245 [2024-07-15 20:57:27.318312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e005e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.318329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.245 #46 NEW cov: 12215 ft: 14931 corp: 25/488b lim: 30 exec/s: 46 rss: 71Mb L: 20/29 MS: 1 ChangeBit- 00:07:00.245 [2024-07-15 20:57:27.367646] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5d 00:07:00.245 [2024-07-15 20:57:27.367797] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000eded 00:07:00.245 [2024-07-15 20:57:27.367954] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ed5e 00:07:00.245 [2024-07-15 20:57:27.368105] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.245 [2024-07-15 20:57:27.368431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.368461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.245 [2024-07-15 20:57:27.368591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e815e cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.368610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.245 [2024-07-15 20:57:27.368730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:eded81ed cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.245 [2024-07-15 20:57:27.368747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.245 [2024-07-15 20:57:27.368859] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ed5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.368876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.246 #47 NEW cov: 12215 ft: 14940 corp: 26/516b lim: 30 exec/s: 47 rss: 72Mb L: 28/29 MS: 1 ShuffleBytes- 00:07:00.246 [2024-07-15 20:57:27.417715] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.246 [2024-07-15 20:57:27.417870] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.246 [2024-07-15 20:57:27.418029] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.246 [2024-07-15 20:57:27.418372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.418401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.246 [2024-07-15 20:57:27.418518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.418536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.246 [2024-07-15 20:57:27.418649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.418666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.246 #48 NEW cov: 12215 ft: 14943 corp: 27/535b lim: 30 exec/s: 48 rss: 72Mb L: 19/29 MS: 1 ShuffleBytes- 00:07:00.246 [2024-07-15 20:57:27.467836] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.246 [2024-07-15 20:57:27.467994] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005e5e 00:07:00.246 [2024-07-15 20:57:27.468148] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5e5e 00:07:00.246 [2024-07-15 20:57:27.468475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.468503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.246 [2024-07-15 20:57:27.468625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e835e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.468644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.246 [2024-07-15 20:57:27.468770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e005e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.468786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.246 #49 NEW cov: 
12215 ft: 14944 corp: 28/555b lim: 30 exec/s: 49 rss: 72Mb L: 20/29 MS: 1 ShuffleBytes- 00:07:00.246 [2024-07-15 20:57:27.518093] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.246 [2024-07-15 20:57:27.518244] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.246 [2024-07-15 20:57:27.518388] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (620924) > buf size (4096) 00:07:00.246 [2024-07-15 20:57:27.518830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.518856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.246 [2024-07-15 20:57:27.518979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.518998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.246 [2024-07-15 20:57:27.519114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.519131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.246 [2024-07-15 20:57:27.519254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.246 [2024-07-15 20:57:27.519271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.505 #50 NEW cov: 12215 ft: 14946 corp: 29/583b lim: 30 exec/s: 50 rss: 72Mb L: 28/29 MS: 1 ShuffleBytes- 00:07:00.505 [2024-07-15 20:57:27.568225] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:07:00.505 [2024-07-15 20:57:27.568371] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.505 [2024-07-15 20:57:27.568529] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.505 [2024-07-15 20:57:27.568685] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (96420) > buf size (4096) 00:07:00.505 [2024-07-15 20:57:27.568999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.569026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.505 [2024-07-15 20:57:27.569149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2828020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.569169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.505 [2024-07-15 20:57:27.569287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 
[2024-07-15 20:57:27.569303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.505 [2024-07-15 20:57:27.569424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:5e280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.569445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.505 #51 NEW cov: 12215 ft: 14957 corp: 30/611b lim: 30 exec/s: 51 rss: 72Mb L: 28/29 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:00.505 [2024-07-15 20:57:27.618278] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.505 [2024-07-15 20:57:27.618439] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.505 [2024-07-15 20:57:27.618616] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (620924) > buf size (4096) 00:07:00.505 [2024-07-15 20:57:27.618960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.618985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.505 [2024-07-15 20:57:27.619110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e79025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.619126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.505 [2024-07-15 20:57:27.619242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.619258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.505 #52 NEW cov: 12215 ft: 14992 corp: 31/630b lim: 30 exec/s: 52 rss: 72Mb L: 19/29 MS: 1 ChangeBit- 00:07:00.505 [2024-07-15 20:57:27.668361] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:07:00.505 [2024-07-15 20:57:27.668510] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41124) > buf size (4096) 00:07:00.505 [2024-07-15 20:57:27.668869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.668898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.505 [2024-07-15 20:57:27.669021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.669039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.505 #53 NEW cov: 12215 ft: 15004 corp: 32/646b lim: 30 exec/s: 53 rss: 72Mb L: 16/29 MS: 1 ShuffleBytes- 00:07:00.505 [2024-07-15 20:57:27.708490] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (125092) > buf size (4096) 00:07:00.505 [2024-07-15 
20:57:27.708667] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41416) > buf size (4096) 00:07:00.505 [2024-07-15 20:57:27.709025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:7a2800ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.709054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.505 [2024-07-15 20:57:27.709174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28710028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.505 [2024-07-15 20:57:27.709193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.505 #54 NEW cov: 12215 ft: 15036 corp: 33/659b lim: 30 exec/s: 54 rss: 72Mb L: 13/29 MS: 1 ChangeByte- 00:07:00.505 [2024-07-15 20:57:27.758805] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5d 00:07:00.505 [2024-07-15 20:57:27.758959] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000eded 00:07:00.506 [2024-07-15 20:57:27.759112] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000eded 00:07:00.506 [2024-07-15 20:57:27.759253] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.506 [2024-07-15 20:57:27.759591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5e0a025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.506 [2024-07-15 20:57:27.759618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.506 [2024-07-15 20:57:27.759743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e815e cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.506 [2024-07-15 20:57:27.759762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.506 [2024-07-15 20:57:27.759888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:eded81ed cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.506 [2024-07-15 20:57:27.759904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.506 [2024-07-15 20:57:27.760020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.506 [2024-07-15 20:57:27.760036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.506 #55 NEW cov: 12215 ft: 15051 corp: 34/687b lim: 30 exec/s: 55 rss: 72Mb L: 28/29 MS: 1 ShuffleBytes- 00:07:00.765 [2024-07-15 20:57:27.798950] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.765 [2024-07-15 20:57:27.799108] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005e5e 00:07:00.765 [2024-07-15 20:57:27.799255] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5e5e 00:07:00.765 [2024-07-15 20:57:27.799402] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 
00:07:00.765 [2024-07-15 20:57:27.799756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.799787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.765 [2024-07-15 20:57:27.799903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e835e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.799922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.765 [2024-07-15 20:57:27.800048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e005e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.800066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.765 [2024-07-15 20:57:27.800182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:5eff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.800201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.765 #56 NEW cov: 12215 ft: 15060 corp: 35/714b lim: 30 exec/s: 56 rss: 72Mb L: 27/29 MS: 1 InsertRepeatedBytes- 00:07:00.765 [2024-07-15 20:57:27.848737] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.765 [2024-07-15 20:57:27.849093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.849123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.765 #57 NEW cov: 12215 ft: 15469 corp: 36/725b lim: 30 exec/s: 57 rss: 72Mb L: 11/29 MS: 1 EraseBytes- 00:07:00.765 [2024-07-15 20:57:27.899129] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa5e 00:07:00.765 [2024-07-15 20:57:27.899296] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.765 [2024-07-15 20:57:27.899453] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.765 [2024-07-15 20:57:27.899794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.899827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.765 [2024-07-15 20:57:27.899945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e5e025c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.899963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.765 [2024-07-15 20:57:27.900082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 
[2024-07-15 20:57:27.900101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.765 #58 NEW cov: 12215 ft: 15471 corp: 37/744b lim: 30 exec/s: 58 rss: 73Mb L: 19/29 MS: 1 ChangeBit- 00:07:00.765 [2024-07-15 20:57:27.949300] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.765 [2024-07-15 20:57:27.949469] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5e 00:07:00.765 [2024-07-15 20:57:27.949625] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005e5e 00:07:00.765 [2024-07-15 20:57:27.950015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.950045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.765 [2024-07-15 20:57:27.950170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.950192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.765 [2024-07-15 20:57:27.950310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5e5e025e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.950328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.765 #59 NEW cov: 12215 ft: 15557 corp: 38/767b lim: 30 exec/s: 59 rss: 73Mb L: 23/29 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:00.765 [2024-07-15 20:57:27.989277] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10404) > buf size (4096) 00:07:00.765 [2024-07-15 20:57:27.989432] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41124) > buf size (4096) 00:07:00.765 [2024-07-15 20:57:27.989812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.765 [2024-07-15 20:57:27.989842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.765 [2024-07-15 20:57:27.989961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.766 [2024-07-15 20:57:27.989978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.766 #60 NEW cov: 12215 ft: 15565 corp: 39/780b lim: 30 exec/s: 30 rss: 73Mb L: 13/29 MS: 1 EraseBytes- 00:07:00.766 #60 DONE cov: 12215 ft: 15565 corp: 39/780b lim: 30 exec/s: 30 rss: 73Mb 00:07:00.766 ###### Recommended dictionary. ###### 00:07:00.766 "\000\000\000\000" # Uses: 1 00:07:00.766 ###### End of recommended dictionary. 
###### 00:07:00.766 Done 60 runs in 2 second(s) 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.025 20:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:01.025 [2024-07-15 20:57:28.180900] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:01.025 [2024-07-15 20:57:28.180964] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781485 ] 00:07:01.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.284 [2024-07-15 20:57:28.432437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.284 [2024-07-15 20:57:28.523355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.543 [2024-07-15 20:57:28.582240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.543 [2024-07-15 20:57:28.598547] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:01.543 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.543 INFO: Seed: 2687394856 00:07:01.543 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:01.543 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:01.543 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:01.543 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.543 #2 INITED exec/s: 0 rss: 63Mb 00:07:01.543 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:01.543 This may also happen if the target rejected all inputs we tried so far 00:07:01.543 [2024-07-15 20:57:28.646373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.543 [2024-07-15 20:57:28.646405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.543 [2024-07-15 20:57:28.646438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.543 [2024-07-15 20:57:28.646467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.543 [2024-07-15 20:57:28.646503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.543 [2024-07-15 20:57:28.646519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.801 NEW_FUNC[1/696]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:01.801 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:01.801 #8 NEW cov: 11887 ft: 11888 corp: 2/27b lim: 35 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:01.801 [2024-07-15 20:57:28.997279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.801 [2024-07-15 20:57:28.997321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.802 [2024-07-15 20:57:28.997355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.802 [2024-07-15 20:57:28.997372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.802 [2024-07-15 20:57:28.997401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:2b00df00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.802 [2024-07-15 20:57:28.997418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.802 #14 NEW cov: 12017 ft: 12417 corp: 3/53b lim: 35 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 CMP- DE: "\000+D\307N\016\232\210"- 00:07:01.802 [2024-07-15 20:57:29.077367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.802 [2024-07-15 20:57:29.077398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.802 [2024-07-15 20:57:29.077431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2cdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.802 [2024-07-15 20:57:29.077455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.802 [2024-07-15 20:57:29.077485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.802 [2024-07-15 20:57:29.077502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.061 #15 NEW cov: 12023 ft: 12683 corp: 4/79b lim: 35 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 ChangeByte- 00:07:02.061 [2024-07-15 20:57:29.127387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2e2e00cf cdw11:2e002e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.127419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.061 #19 NEW cov: 12108 ft: 13377 corp: 5/90b lim: 35 exec/s: 0 rss: 70Mb L: 11/26 MS: 4 CrossOver-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:07:02.061 [2024-07-15 20:57:29.187646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.187677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.061 [2024-07-15 20:57:29.187710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:ff00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.187726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.061 [2024-07-15 20:57:29.187760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:2b00df00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.187776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:02.061 #20 NEW cov: 12108 ft: 13478 corp: 6/116b lim: 35 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 ChangeBit- 00:07:02.061 [2024-07-15 20:57:29.267827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.267858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.061 [2024-07-15 20:57:29.267891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2cdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.267908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.061 #21 NEW cov: 12108 ft: 13766 corp: 7/132b lim: 35 exec/s: 0 rss: 70Mb L: 16/26 MS: 1 EraseBytes- 00:07:02.061 [2024-07-15 20:57:29.348085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.348117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.061 [2024-07-15 20:57:29.348150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:ff00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.348167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.061 [2024-07-15 20:57:29.348196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffdf00df cdw11:2b00df00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.061 [2024-07-15 20:57:29.348213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.319 #22 NEW cov: 12108 ft: 13817 corp: 8/158b lim: 35 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 ChangeBit- 00:07:02.319 [2024-07-15 20:57:29.428243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.319 [2024-07-15 20:57:29.428276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.319 [2024-07-15 20:57:29.428310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.319 [2024-07-15 20:57:29.428327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.319 #23 NEW cov: 12108 ft: 13904 corp: 9/176b lim: 35 exec/s: 0 rss: 71Mb L: 18/26 MS: 1 CrossOver- 00:07:02.319 [2024-07-15 20:57:29.508464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.319 [2024-07-15 20:57:29.508496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.319 [2024-07-15 20:57:29.508529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df 
cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.319 [2024-07-15 20:57:29.508546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.319 [2024-07-15 20:57:29.508575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:2b00df00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.319 [2024-07-15 20:57:29.508595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.319 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:02.319 #24 NEW cov: 12125 ft: 13939 corp: 10/202b lim: 35 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 ShuffleBytes- 00:07:02.319 [2024-07-15 20:57:29.558588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.319 [2024-07-15 20:57:29.558619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.319 [2024-07-15 20:57:29.558653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0ed40058 cdw11:00004d7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.319 [2024-07-15 20:57:29.558669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.320 [2024-07-15 20:57:29.558698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.320 [2024-07-15 20:57:29.558714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.320 #25 NEW cov: 12125 ft: 14059 corp: 11/228b lim: 35 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 CMP- DE: "\214X\016\324M\177\000\000"- 00:07:02.320 [2024-07-15 20:57:29.608729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.320 [2024-07-15 20:57:29.608760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.320 [2024-07-15 20:57:29.608794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0ed40058 cdw11:00004d7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.320 [2024-07-15 20:57:29.608811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.320 [2024-07-15 20:57:29.608840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:df0000df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.320 [2024-07-15 20:57:29.608856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.577 #26 NEW cov: 12125 ft: 14090 corp: 12/254b lim: 35 exec/s: 26 rss: 71Mb L: 26/26 MS: 1 CopyPart- 00:07:02.577 [2024-07-15 20:57:29.688894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00d7df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.688926] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.577 [2024-07-15 20:57:29.688958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:ff00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.688973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.577 [2024-07-15 20:57:29.689002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffdf00df cdw11:2b00df00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.689018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.577 #27 NEW cov: 12125 ft: 14119 corp: 13/280b lim: 35 exec/s: 27 rss: 71Mb L: 26/26 MS: 1 ChangeBit- 00:07:02.577 [2024-07-15 20:57:29.739122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.739153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.577 [2024-07-15 20:57:29.739192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2cdf00df cdw11:3500dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.739208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.577 [2024-07-15 20:57:29.739235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:35350035 cdw11:35003535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.739250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.577 [2024-07-15 20:57:29.739277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:35350035 cdw11:35003535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.739292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.577 [2024-07-15 20:57:29.739319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:35350035 cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.739334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.577 #28 NEW cov: 12125 ft: 14671 corp: 14/315b lim: 35 exec/s: 28 rss: 71Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:02.577 [2024-07-15 20:57:29.819185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf0025 cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.819215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.577 [2024-07-15 20:57:29.819246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.577 [2024-07-15 20:57:29.819261] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.837 #29 NEW cov: 12125 ft: 14735 corp: 15/334b lim: 35 exec/s: 29 rss: 71Mb L: 19/35 MS: 1 InsertByte- 00:07:02.837 [2024-07-15 20:57:29.899554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.899585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:29.899616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.899632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:29.899659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:df002cdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.899675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:29.899702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.899718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:29.899744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.899758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.837 #30 NEW cov: 12125 ft: 14787 corp: 16/369b lim: 35 exec/s: 30 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:07:02.837 [2024-07-15 20:57:29.960209] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.837 [2024-07-15 20:57:29.960599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.960629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:29.960682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.960696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:29.960746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:3b0035a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.960761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:29.960812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 
cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.960826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:29.960876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:29.960890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.837 #31 NEW cov: 12134 ft: 14930 corp: 17/404b lim: 35 exec/s: 31 rss: 71Mb L: 35/35 MS: 1 CMP- DE: "\000\000\000\0005\241;\021"- 00:07:02.837 [2024-07-15 20:57:30.010187] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:02.837 [2024-07-15 20:57:30.010391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:0000df00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:30.010417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:30.010467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:df000080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:30.010484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.837 #32 NEW cov: 12134 ft: 14996 corp: 18/420b lim: 35 exec/s: 32 rss: 71Mb L: 16/35 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\200"- 00:07:02.837 [2024-07-15 20:57:30.050516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf0025 cdw11:2100dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:30.050547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:30.050600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:30.050615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.837 #33 NEW cov: 12134 ft: 15054 corp: 19/440b lim: 35 exec/s: 33 rss: 71Mb L: 20/35 MS: 1 InsertByte- 00:07:02.837 [2024-07-15 20:57:30.100754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:30.100782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:30.100834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:ff00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 [2024-07-15 20:57:30.100851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.837 [2024-07-15 20:57:30.100904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffdf00df cdw11:2b00df00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.837 
[2024-07-15 20:57:30.100918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.837 #34 NEW cov: 12134 ft: 15209 corp: 20/466b lim: 35 exec/s: 34 rss: 71Mb L: 26/35 MS: 1 CrossOver- 00:07:03.096 [2024-07-15 20:57:30.141064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.141091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.141144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.141158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.141210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:df002cdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.141223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.141273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:dfdf00df cdw11:df00dadf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.141287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.141337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.141351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.096 #35 NEW cov: 12134 ft: 15227 corp: 21/501b lim: 35 exec/s: 35 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:03.096 [2024-07-15 20:57:30.180805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf0025 cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.180830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.180884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.180898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.096 #36 NEW cov: 12134 ft: 15255 corp: 22/520b lim: 35 exec/s: 36 rss: 71Mb L: 19/35 MS: 1 ChangeBit- 00:07:03.096 [2024-07-15 20:57:30.221041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.221066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.221118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 
nsid:0 cdw10:23df00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.221132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.221184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfff00df cdw11:0000dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.221201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.096 #37 NEW cov: 12134 ft: 15266 corp: 23/547b lim: 35 exec/s: 37 rss: 71Mb L: 27/35 MS: 1 InsertByte- 00:07:03.096 [2024-07-15 20:57:30.261050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:df8a00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.261076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.261130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.261144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.096 #38 NEW cov: 12134 ft: 15295 corp: 24/566b lim: 35 exec/s: 38 rss: 71Mb L: 19/35 MS: 1 InsertByte- 00:07:03.096 [2024-07-15 20:57:30.301394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.301420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.301473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:ff00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.301487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.301540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00df cdw11:df00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.301554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.096 [2024-07-15 20:57:30.301605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:002b00df cdw11:4e0044c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.096 [2024-07-15 20:57:30.301618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.096 #39 NEW cov: 12134 ft: 15324 corp: 25/596b lim: 35 exec/s: 39 rss: 71Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:07:03.097 [2024-07-15 20:57:30.341515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.097 [2024-07-15 20:57:30.341541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.097 [2024-07-15 
20:57:30.341592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:ff00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.097 [2024-07-15 20:57:30.341606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.097 [2024-07-15 20:57:30.341657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfff0021 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.097 [2024-07-15 20:57:30.341671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.097 [2024-07-15 20:57:30.341723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:df0000df cdw11:c7002b44 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.097 [2024-07-15 20:57:30.341737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.097 #40 NEW cov: 12134 ft: 15341 corp: 26/627b lim: 35 exec/s: 40 rss: 71Mb L: 31/35 MS: 1 InsertByte- 00:07:03.356 [2024-07-15 20:57:30.391507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.391537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.391591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.391605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.391656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:35003535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.391670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.356 #41 NEW cov: 12134 ft: 15353 corp: 27/650b lim: 35 exec/s: 41 rss: 71Mb L: 23/35 MS: 1 CrossOver- 00:07:03.356 [2024-07-15 20:57:30.431882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.431908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.431962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:df0000df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.431976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.432028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:df00ffdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.432043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.432092] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.432105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.432157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:44c7002b cdw11:9a004e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.432171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.356 #42 NEW cov: 12134 ft: 15386 corp: 28/685b lim: 35 exec/s: 42 rss: 71Mb L: 35/35 MS: 1 CrossOver- 00:07:03.356 [2024-07-15 20:57:30.471871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.471897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.471950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:23df00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.471963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.472014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfff00df cdw11:0000dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.472028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.472078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:c74e0044 cdw11:9a000e8b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.472091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.356 #43 NEW cov: 12134 ft: 15399 corp: 29/713b lim: 35 exec/s: 43 rss: 71Mb L: 28/35 MS: 1 InsertByte- 00:07:03.356 [2024-07-15 20:57:30.521831] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:03.356 [2024-07-15 20:57:30.522040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:df8c00df cdw11:d400580e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.522066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.522122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0000007f cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.522137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.522190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:580e008c cdw11:7f00d44d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.522204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.522254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:dfdf0000 cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.522270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.356 #44 NEW cov: 12141 ft: 15411 corp: 30/747b lim: 35 exec/s: 44 rss: 71Mb L: 34/35 MS: 1 PersAutoDict- DE: "\214X\016\324M\177\000\000"- 00:07:03.356 [2024-07-15 20:57:30.561974] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:03.356 [2024-07-15 20:57:30.562269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.562295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.562347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:df0000df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.562362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.562413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:0000ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.562427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.562481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:df000080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.562498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.562549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:44c7002b cdw11:9a004e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.562563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.356 #45 NEW cov: 12141 ft: 15428 corp: 31/782b lim: 35 exec/s: 45 rss: 71Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\200"- 00:07:03.356 [2024-07-15 20:57:30.612159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.612185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.612240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:23df00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.612254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.356 [2024-07-15 20:57:30.612303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfff00df cdw11:8c00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.356 [2024-07-15 20:57:30.612318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.356 #46 NEW cov: 12141 ft: 15457 corp: 32/809b lim: 35 exec/s: 23 rss: 71Mb L: 27/35 MS: 1 PersAutoDict- DE: "\214X\016\324M\177\000\000"- 00:07:03.356 #46 DONE cov: 12141 ft: 15457 corp: 32/809b lim: 35 exec/s: 23 rss: 71Mb 00:07:03.356 ###### Recommended dictionary. ###### 00:07:03.356 "\000+D\307N\016\232\210" # Uses: 0 00:07:03.356 "\214X\016\324M\177\000\000" # Uses: 2 00:07:03.356 "\000\000\000\0005\241;\021" # Uses: 0 00:07:03.356 "\000\000\000\000\000\000\000\200" # Uses: 1 00:07:03.356 ###### End of recommended dictionary. ###### 00:07:03.356 Done 46 runs in 2 second(s) 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:03.615 20:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:03.615 [2024-07-15 20:57:30.801146] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:03.615 [2024-07-15 20:57:30.801218] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781871 ] 00:07:03.615 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.874 [2024-07-15 20:57:31.051844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.874 [2024-07-15 20:57:31.138346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.132 [2024-07-15 20:57:31.197608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.132 [2024-07-15 20:57:31.213895] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:04.132 INFO: Running with entropic power schedule (0xFF, 100). 00:07:04.132 INFO: Seed: 1010437063 00:07:04.132 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:04.132 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:04.132 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:04.132 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.132 #2 INITED exec/s: 0 rss: 63Mb 00:07:04.132 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:04.132 This may also happen if the target rejected all inputs we tried so far 00:07:04.391 NEW_FUNC[1/685]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:04.391 NEW_FUNC[2/685]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:04.391 #5 NEW cov: 11784 ft: 11785 corp: 2/5b lim: 20 exec/s: 0 rss: 70Mb L: 4/4 MS: 3 ChangeBit-InsertByte-CopyPart- 00:07:04.391 #6 NEW cov: 11914 ft: 12528 corp: 3/9b lim: 20 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:07:04.649 #7 NEW cov: 11934 ft: 13043 corp: 4/17b lim: 20 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CrossOver- 00:07:04.649 #8 NEW cov: 12019 ft: 13351 corp: 5/25b lim: 20 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:04.649 #9 NEW cov: 12019 ft: 13448 corp: 6/33b lim: 20 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:04.649 #10 NEW cov: 12019 ft: 13512 corp: 7/41b lim: 20 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CrossOver- 00:07:04.908 #11 NEW cov: 12019 ft: 13605 corp: 8/48b lim: 20 exec/s: 0 rss: 71Mb L: 7/8 MS: 1 EraseBytes- 00:07:04.908 #16 NEW cov: 12019 ft: 13628 corp: 9/52b lim: 20 exec/s: 0 rss: 71Mb L: 4/8 MS: 5 InsertByte-ShuffleBytes-CrossOver-InsertByte-InsertByte- 00:07:04.908 #17 NEW cov: 12019 ft: 13641 corp: 10/60b lim: 20 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ChangeBit- 00:07:04.908 #18 NEW cov: 12019 ft: 13745 corp: 11/64b lim: 20 exec/s: 0 rss: 71Mb L: 4/8 MS: 1 EraseBytes- 00:07:04.908 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:04.908 #19 NEW cov: 12042 ft: 13850 corp: 12/72b lim: 20 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ChangeBit- 00:07:04.908 [2024-07-15 20:57:32.183414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.908 [2024-07-15 20:57:32.183459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:05.167 NEW_FUNC[1/17]: 0x11d8320 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3359 00:07:05.167 NEW_FUNC[2/17]: 0x11d8ea0 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3301 00:07:05.167 #20 NEW cov: 12302 ft: 14465 corp: 13/88b lim: 20 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\017"- 00:07:05.167 #21 NEW cov: 12302 ft: 14478 corp: 14/96b lim: 20 exec/s: 21 rss: 72Mb L: 8/16 MS: 1 ChangeByte- 00:07:05.167 #22 NEW cov: 12302 ft: 14527 corp: 15/104b lim: 20 exec/s: 22 rss: 72Mb L: 8/16 MS: 1 ChangeBit- 00:07:05.167 #23 NEW cov: 12302 ft: 14600 corp: 16/111b lim: 20 exec/s: 23 rss: 72Mb L: 7/16 MS: 1 ShuffleBytes- 00:07:05.167 #24 NEW cov: 12302 ft: 14651 corp: 17/119b lim: 20 exec/s: 24 rss: 72Mb L: 8/16 MS: 1 ChangeBit- 00:07:05.426 #25 NEW cov: 12302 ft: 14700 corp: 18/123b lim: 20 exec/s: 25 rss: 72Mb L: 4/16 MS: 1 ChangeByte- 00:07:05.426 #26 NEW cov: 12302 ft: 14761 corp: 19/132b lim: 20 exec/s: 26 rss: 72Mb L: 9/16 MS: 1 CrossOver- 00:07:05.426 #30 NEW cov: 12302 ft: 14792 corp: 20/136b lim: 20 exec/s: 30 rss: 72Mb L: 4/16 MS: 4 EraseBytes-ShuffleBytes-ChangeASCIIInt-CrossOver- 00:07:05.426 #31 NEW cov: 12302 ft: 14811 corp: 21/144b lim: 20 exec/s: 31 rss: 72Mb L: 8/16 MS: 1 CopyPart- 00:07:05.685 NEW_FUNC[1/5]: 0x1171ab0 in nvmf_ctrlr_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3432 00:07:05.685 NEW_FUNC[2/5]: 0x11726c0 in spdk_nvmf_request_get_bdev /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4923 00:07:05.685 #32 NEW cov: 12410 ft: 14959 corp: 22/148b lim: 20 exec/s: 32 rss: 72Mb L: 4/16 MS: 1 ChangeBinInt- 00:07:05.685 #33 NEW cov: 12412 ft: 15011 corp: 23/164b lim: 20 exec/s: 33 rss: 72Mb L: 16/16 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\017"- 00:07:05.685 #34 NEW cov: 12412 ft: 15052 corp: 24/168b lim: 20 exec/s: 34 rss: 72Mb L: 4/16 MS: 1 ChangeBinInt- 00:07:05.685 #35 NEW cov: 12412 ft: 15061 corp: 25/177b lim: 20 exec/s: 35 rss: 72Mb L: 9/16 MS: 1 InsertByte- 00:07:05.685 #36 NEW cov: 12412 ft: 15077 corp: 26/186b lim: 20 exec/s: 36 rss: 72Mb L: 9/16 MS: 1 CrossOver- 00:07:05.944 #37 NEW cov: 12412 ft: 15087 corp: 27/194b lim: 20 exec/s: 37 rss: 72Mb L: 8/16 MS: 1 InsertRepeatedBytes- 00:07:05.944 #38 NEW cov: 12412 ft: 15157 corp: 28/202b lim: 20 exec/s: 38 rss: 73Mb L: 8/16 MS: 1 ChangeByte- 00:07:05.944 #39 NEW cov: 12412 ft: 15213 corp: 29/210b lim: 20 exec/s: 39 rss: 73Mb L: 8/16 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\017"- 00:07:05.944 #40 NEW cov: 12416 ft: 15311 corp: 30/223b lim: 20 exec/s: 40 rss: 73Mb L: 13/16 MS: 1 CopyPart- 00:07:05.944 #41 NEW cov: 12416 ft: 15312 corp: 31/239b lim: 20 exec/s: 41 rss: 73Mb L: 16/16 MS: 1 ChangeBit- 00:07:06.202 #42 NEW cov: 12416 ft: 15349 corp: 32/248b lim: 20 exec/s: 21 rss: 73Mb L: 9/16 MS: 1 ChangeBinInt- 00:07:06.202 #42 DONE cov: 12416 ft: 15349 corp: 32/248b lim: 20 exec/s: 21 rss: 73Mb 00:07:06.202 ###### Recommended dictionary. ###### 00:07:06.202 "\001\000\000\000\000\000\000\017" # Uses: 2 00:07:06.202 ###### End of recommended dictionary. 
###### 00:07:06.202 Done 42 runs in 2 second(s) 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:06.202 20:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:06.202 [2024-07-15 20:57:33.445091] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:06.202 [2024-07-15 20:57:33.445157] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782408 ] 00:07:06.202 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.461 [2024-07-15 20:57:33.696745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.720 [2024-07-15 20:57:33.788949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.720 [2024-07-15 20:57:33.847863] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.720 [2024-07-15 20:57:33.864154] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:06.720 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.720 INFO: Seed: 3659419299 00:07:06.720 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:06.720 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:06.720 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:06.720 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.720 #2 INITED exec/s: 0 rss: 63Mb 00:07:06.720 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:06.720 This may also happen if the target rejected all inputs we tried so far 00:07:06.720 [2024-07-15 20:57:33.909921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.720 [2024-07-15 20:57:33.909948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.720 [2024-07-15 20:57:33.910006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.720 [2024-07-15 20:57:33.910020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.720 [2024-07-15 20:57:33.910073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.720 [2024-07-15 20:57:33.910086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.720 [2024-07-15 20:57:33.910139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.720 [2024-07-15 20:57:33.910152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.978 NEW_FUNC[1/697]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:06.978 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.978 #7 NEW cov: 11908 ft: 11901 corp: 2/35b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 5 InsertByte-CopyPart-ShuffleBytes-EraseBytes-InsertRepeatedBytes- 00:07:06.978 [2024-07-15 20:57:34.240658] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.978 [2024-07-15 20:57:34.240689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.978 [2024-07-15 20:57:34.240744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.978 [2024-07-15 20:57:34.240758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.978 [2024-07-15 20:57:34.240810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.978 [2024-07-15 20:57:34.240823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.978 [2024-07-15 20:57:34.240877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.978 [2024-07-15 20:57:34.240890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.237 #8 NEW cov: 12038 ft: 12580 corp: 3/69b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 ChangeByte- 00:07:07.237 [2024-07-15 20:57:34.290263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bb61d4d0 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.290289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.237 #10 NEW cov: 12044 ft: 13622 corp: 4/78b lim: 35 exec/s: 0 rss: 70Mb L: 9/34 MS: 2 ChangeByte-CMP- DE: "\324\320\273a\312D+\000"- 00:07:07.237 [2024-07-15 20:57:34.330997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.331022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.331076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.331090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.331142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.331154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.331207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c609c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.331221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.237 
[2024-07-15 20:57:34.331271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c3a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.331285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.237 #11 NEW cov: 12129 ft: 13909 corp: 5/113b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertByte- 00:07:07.237 [2024-07-15 20:57:34.380978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.381003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.381057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.381070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.381122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2b449c00 cdw11:ca6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.381135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.381186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9ca7ba cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.381199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.237 #12 NEW cov: 12129 ft: 14078 corp: 6/147b lim: 35 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 CMP- DE: "\000+D\312n\364\247\272"- 00:07:07.237 [2024-07-15 20:57:34.421237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.421262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.421317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.421330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.421383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.421397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.421452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c609c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.421465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:07:07.237 [2024-07-15 20:57:34.421517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c3a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.421531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.237 #13 NEW cov: 12129 ft: 14136 corp: 7/182b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:07.237 [2024-07-15 20:57:34.471352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.471376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.471432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.471451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.471503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.471516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.471569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c609c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.471582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.237 [2024-07-15 20:57:34.471633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:0a9c3a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.237 [2024-07-15 20:57:34.471646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.237 #14 NEW cov: 12129 ft: 14167 corp: 8/217b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:07:07.237 [2024-07-15 20:57:34.521514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.238 [2024-07-15 20:57:34.521538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.238 [2024-07-15 20:57:34.521595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.238 [2024-07-15 20:57:34.521611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.238 [2024-07-15 20:57:34.521663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:002b419c cdw11:44ca0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.238 [2024-07-15 20:57:34.521676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:07.238 [2024-07-15 20:57:34.521728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ba9cf4a7 cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.238 [2024-07-15 20:57:34.521743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.238 [2024-07-15 20:57:34.521797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.238 [2024-07-15 20:57:34.521809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.496 #15 NEW cov: 12129 ft: 14225 corp: 9/252b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertByte- 00:07:07.496 [2024-07-15 20:57:34.571509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.571534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.496 [2024-07-15 20:57:34.571588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.571602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.496 [2024-07-15 20:57:34.571654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.571667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.496 [2024-07-15 20:57:34.571721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.571734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.496 #16 NEW cov: 12129 ft: 14252 corp: 10/286b lim: 35 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 ChangeByte- 00:07:07.496 [2024-07-15 20:57:34.611141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bb2cd4d0 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.611165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.496 #17 NEW cov: 12129 ft: 14312 corp: 11/295b lim: 35 exec/s: 0 rss: 71Mb L: 9/35 MS: 1 ChangeByte- 00:07:07.496 [2024-07-15 20:57:34.661302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bb2cd4d0 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.661327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.496 #18 NEW cov: 12129 ft: 14332 corp: 12/308b lim: 35 exec/s: 0 rss: 71Mb L: 13/35 MS: 1 CopyPart- 00:07:07.496 [2024-07-15 20:57:34.711481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ca44d4d0 
cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.711508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.496 #19 NEW cov: 12129 ft: 14351 corp: 13/317b lim: 35 exec/s: 0 rss: 71Mb L: 9/35 MS: 1 CrossOver- 00:07:07.496 [2024-07-15 20:57:34.751728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.751752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.496 [2024-07-15 20:57:34.751806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.496 [2024-07-15 20:57:34.751819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.496 #20 NEW cov: 12129 ft: 14594 corp: 14/335b lim: 35 exec/s: 0 rss: 71Mb L: 18/35 MS: 1 EraseBytes- 00:07:07.754 [2024-07-15 20:57:34.801707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.801731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.754 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:07.754 #21 NEW cov: 12152 ft: 14630 corp: 15/344b lim: 35 exec/s: 0 rss: 71Mb L: 9/35 MS: 1 EraseBytes- 00:07:07.754 [2024-07-15 20:57:34.852459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.852483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.852538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:63639c64 cdw11:599c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.852553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.852607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.852621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.852674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c609c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.852688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.852743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c3a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.852756] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.754 #22 NEW cov: 12152 ft: 14655 corp: 16/379b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:07.754 [2024-07-15 20:57:34.891935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bb61d4d0 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.891959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.754 #23 NEW cov: 12152 ft: 14686 corp: 17/388b lim: 35 exec/s: 23 rss: 71Mb L: 9/35 MS: 1 ChangeByte- 00:07:07.754 [2024-07-15 20:57:34.932723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.932749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.932804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.932817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.932869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.932882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.932932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.932948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.932999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.933013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.754 #24 NEW cov: 12152 ft: 14709 corp: 18/423b lim: 35 exec/s: 24 rss: 71Mb L: 35/35 MS: 1 InsertByte- 00:07:07.754 [2024-07-15 20:57:34.972841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c1b0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.972864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.972917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.972930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.972982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 
cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.972995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.973045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c609c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.973058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:34.973110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:0a9c3a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:34.973123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.754 #25 NEW cov: 12152 ft: 14712 corp: 19/458b lim: 35 exec/s: 25 rss: 71Mb L: 35/35 MS: 1 ChangeByte- 00:07:07.754 [2024-07-15 20:57:35.022674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:35.022698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:35.022750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:35.022764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.754 [2024-07-15 20:57:35.022820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.754 [2024-07-15 20:57:35.022834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.014 #26 NEW cov: 12152 ft: 14921 corp: 20/481b lim: 35 exec/s: 26 rss: 71Mb L: 23/35 MS: 1 CrossOver- 00:07:08.014 [2024-07-15 20:57:35.072533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bb61d4d0 cdw11:ca610003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.072557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.014 #27 NEW cov: 12152 ft: 14981 corp: 21/490b lim: 35 exec/s: 27 rss: 71Mb L: 9/35 MS: 1 CopyPart- 00:07:08.014 [2024-07-15 20:57:35.112620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bb61acd0 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.112644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.014 #28 NEW cov: 12152 ft: 14997 corp: 22/499b lim: 35 exec/s: 28 rss: 71Mb L: 9/35 MS: 1 ChangeByte- 00:07:08.014 [2024-07-15 20:57:35.153350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.153374] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.014 [2024-07-15 20:57:35.153426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.153440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.014 [2024-07-15 20:57:35.153496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:002b419c cdw11:449c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.153509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.014 [2024-07-15 20:57:35.153561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ba9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.153574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.014 [2024-07-15 20:57:35.153624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.153638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.014 #29 NEW cov: 12152 ft: 15062 corp: 23/534b lim: 35 exec/s: 29 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:07:08.014 [2024-07-15 20:57:35.203350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.203374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.014 [2024-07-15 20:57:35.203427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.203446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.014 [2024-07-15 20:57:35.203498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c640002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.203515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.014 [2024-07-15 20:57:35.203565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.203579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.014 #30 NEW cov: 12152 ft: 15075 corp: 24/568b lim: 35 exec/s: 30 rss: 71Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:08.014 [2024-07-15 20:57:35.243154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9cd49c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.243178] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.014 [2024-07-15 20:57:35.243230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60d09c9c cdw11:ca440003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.243244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.014 #31 NEW cov: 12152 ft: 15106 corp: 25/586b lim: 35 exec/s: 31 rss: 72Mb L: 18/35 MS: 1 CrossOver- 00:07:08.014 [2024-07-15 20:57:35.293140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8ebbd4d0 cdw11:61ca0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.014 [2024-07-15 20:57:35.293164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.274 #32 NEW cov: 12152 ft: 15114 corp: 26/596b lim: 35 exec/s: 32 rss: 72Mb L: 10/35 MS: 1 InsertByte- 00:07:08.274 [2024-07-15 20:57:35.343294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bb61acd0 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.343318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.274 #33 NEW cov: 12152 ft: 15133 corp: 27/605b lim: 35 exec/s: 33 rss: 72Mb L: 9/35 MS: 1 ChangeBinInt- 00:07:08.274 [2024-07-15 20:57:35.393751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.393775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.393828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.393842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.393893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.393906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.274 #34 NEW cov: 12152 ft: 15168 corp: 28/628b lim: 35 exec/s: 34 rss: 72Mb L: 23/35 MS: 1 ChangeBit- 00:07:08.274 [2024-07-15 20:57:35.444029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.444054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.444109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c419c9c cdw11:9c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.444123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 
20:57:35.444177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6ef444ca cdw11:a7ba0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.444191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.444242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.444255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.274 #35 NEW cov: 12152 ft: 15173 corp: 29/659b lim: 35 exec/s: 35 rss: 72Mb L: 31/35 MS: 1 EraseBytes- 00:07:08.274 [2024-07-15 20:57:35.484277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.484302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.484356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c419c9c cdw11:9c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.484370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.484421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:002b0000 cdw11:44ca0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.484435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.484491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ba9cf4a7 cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.484504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.484556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.484570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.274 #36 NEW cov: 12152 ft: 15185 corp: 30/694b lim: 35 exec/s: 36 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:08.274 [2024-07-15 20:57:35.534423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.534452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.534521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.534536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.274 
[2024-07-15 20:57:35.534586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:002b419c cdw11:449c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.534600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.534650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ba9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.534663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.274 [2024-07-15 20:57:35.534713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c9c9c cdw11:2a9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.274 [2024-07-15 20:57:35.534730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.535 #37 NEW cov: 12152 ft: 15193 corp: 31/729b lim: 35 exec/s: 37 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:08.535 [2024-07-15 20:57:35.584419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.584451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.584505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.584518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.584569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9cf49c9c cdw11:a7ba0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.584582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.584634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.584648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.535 #38 NEW cov: 12152 ft: 15206 corp: 32/760b lim: 35 exec/s: 38 rss: 72Mb L: 31/35 MS: 1 CrossOver- 00:07:08.535 [2024-07-15 20:57:35.624381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.624405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.624463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.624476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.535 
[2024-07-15 20:57:35.624541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.624555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.535 #39 NEW cov: 12152 ft: 15287 corp: 33/781b lim: 35 exec/s: 39 rss: 72Mb L: 21/35 MS: 1 CrossOver- 00:07:08.535 [2024-07-15 20:57:35.664331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bb2cd4d0 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.664356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.664410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2b2bca44 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.664423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.535 #40 NEW cov: 12152 ft: 15302 corp: 34/799b lim: 35 exec/s: 40 rss: 72Mb L: 18/35 MS: 1 CopyPart- 00:07:08.535 [2024-07-15 20:57:35.714324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8ebbd4d0 cdw11:61ca0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.714349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.535 #41 NEW cov: 12152 ft: 15314 corp: 35/809b lim: 35 exec/s: 41 rss: 72Mb L: 10/35 MS: 1 CopyPart- 00:07:08.535 [2024-07-15 20:57:35.765086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.765111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.765165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.765179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.765231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000419c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.765244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.765295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ba9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.765308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.765358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.765371] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.535 #42 NEW cov: 12152 ft: 15318 corp: 36/844b lim: 35 exec/s: 42 rss: 73Mb L: 35/35 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:08.535 [2024-07-15 20:57:35.805062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c289c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.805087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.805141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.805154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.805206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9cf49c9c cdw11:a7ba0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.805219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.535 [2024-07-15 20:57:35.805270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.535 [2024-07-15 20:57:35.805286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.796 #43 NEW cov: 12152 ft: 15343 corp: 37/875b lim: 35 exec/s: 43 rss: 73Mb L: 31/35 MS: 1 ChangeByte- 00:07:08.796 [2024-07-15 20:57:35.854722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bbc6acd0 cdw11:ca440000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.796 [2024-07-15 20:57:35.854746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.796 #44 NEW cov: 12152 ft: 15348 corp: 38/884b lim: 35 exec/s: 44 rss: 73Mb L: 9/35 MS: 1 ChangeByte- 00:07:08.796 [2024-07-15 20:57:35.905026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9c9c0a9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.796 [2024-07-15 20:57:35.905050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.796 [2024-07-15 20:57:35.905106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9c9c9c9c cdw11:9c9c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.796 [2024-07-15 20:57:35.905120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.796 #45 NEW cov: 12152 ft: 15355 corp: 39/904b lim: 35 exec/s: 22 rss: 73Mb L: 20/35 MS: 1 EraseBytes- 00:07:08.796 #45 DONE cov: 12152 ft: 15355 corp: 39/904b lim: 35 exec/s: 22 rss: 73Mb 00:07:08.796 ###### Recommended dictionary. ###### 00:07:08.796 "\324\320\273a\312D+\000" # Uses: 0 00:07:08.796 "\000+D\312n\364\247\272" # Uses: 0 00:07:08.796 "\000\000\000\000" # Uses: 0 00:07:08.796 ###### End of recommended dictionary. 
###### 00:07:08.796 Done 45 runs in 2 second(s) 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.796 20:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:09.055 [2024-07-15 20:57:36.094190] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
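The ../common.sh lines in the trace ((( i++ )), (( i < fuzz_num )), start_llvm_fuzz 5 1 0x1) indicate an outer loop in the harness that launches one short run per fuzzer type, each for one second on core mask 0x1. A rough reconstruction of that loop, with the counter and bound names taken from the trace and the loop shape assumed:

    # one short pass over every nvmf fuzzer type; 1 second each, pinned to core 0x1
    for (( i = 0; i < fuzz_num; i++ )); do
        start_llvm_fuzz $i 1 0x1
    done

The two "echo leak:..." lines presumably feed the LSAN suppressions file named in LSAN_OPTIONS (/var/tmp/suppress_nvmf_fuzz), so that leak reports originating in spdk_nvmf_qpair_disconnect and nvmf_ctrlr_create are suppressed when the target exits.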
00:07:09.055 [2024-07-15 20:57:36.094275] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782944 ] 00:07:09.055 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.055 [2024-07-15 20:57:36.270425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.055 [2024-07-15 20:57:36.335393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.315 [2024-07-15 20:57:36.394685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.315 [2024-07-15 20:57:36.410977] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:09.315 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.315 INFO: Seed: 1911451805 00:07:09.315 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:09.315 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:09.315 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:09.315 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.315 #2 INITED exec/s: 0 rss: 63Mb 00:07:09.316 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:09.316 This may also happen if the target rejected all inputs we tried so far 00:07:09.316 [2024-07-15 20:57:36.456463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.316 [2024-07-15 20:57:36.456492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.316 [2024-07-15 20:57:36.456547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.316 [2024-07-15 20:57:36.456562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.574 NEW_FUNC[1/697]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:09.574 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:09.574 #9 NEW cov: 11919 ft: 11913 corp: 2/20b lim: 45 exec/s: 0 rss: 70Mb L: 19/19 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:09.574 [2024-07-15 20:57:36.777270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.574 [2024-07-15 20:57:36.777301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.574 [2024-07-15 20:57:36.777355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.574 [2024-07-15 20:57:36.777369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.574 [2024-07-15 20:57:36.777420] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.574 [2024-07-15 20:57:36.777434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.574 #14 NEW cov: 12049 ft: 12813 corp: 3/53b lim: 45 exec/s: 0 rss: 70Mb L: 33/33 MS: 5 ChangeBit-ChangeByte-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:09.574 [2024-07-15 20:57:36.817161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.574 [2024-07-15 20:57:36.817187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.574 [2024-07-15 20:57:36.817242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:007c7c00 cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.574 [2024-07-15 20:57:36.817256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.574 #15 NEW cov: 12055 ft: 12975 corp: 4/76b lim: 45 exec/s: 0 rss: 70Mb L: 23/33 MS: 1 CrossOver- 00:07:09.833 [2024-07-15 20:57:36.867476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.867503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.833 [2024-07-15 20:57:36.867572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.867591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.833 [2024-07-15 20:57:36.867643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.867658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.833 #16 NEW cov: 12140 ft: 13305 corp: 5/109b lim: 45 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ChangeBit- 00:07:09.833 [2024-07-15 20:57:36.917672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.917697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.833 [2024-07-15 20:57:36.917752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.917766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.833 [2024-07-15 20:57:36.917818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.917831] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.833 #17 NEW cov: 12143 ft: 13604 corp: 6/142b lim: 45 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ShuffleBytes- 00:07:09.833 [2024-07-15 20:57:36.957917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.957942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.833 [2024-07-15 20:57:36.957996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00a20010 cdw11:59870007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.958011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.833 [2024-07-15 20:57:36.958061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00002b00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.958075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.833 [2024-07-15 20:57:36.958127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.833 [2024-07-15 20:57:36.958140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.833 #18 NEW cov: 12143 ft: 14013 corp: 7/183b lim: 45 exec/s: 0 rss: 70Mb L: 41/41 MS: 1 CMP- DE: "\242Y\207\355\313D+\000"- 00:07:09.833 [2024-07-15 20:57:37.007862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.007887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.834 [2024-07-15 20:57:37.007941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.007955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.834 [2024-07-15 20:57:37.008006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.008023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.834 #19 NEW cov: 12143 ft: 14108 corp: 8/216b lim: 45 exec/s: 0 rss: 70Mb L: 33/41 MS: 1 CrossOver- 00:07:09.834 [2024-07-15 20:57:37.047962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.047987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.834 [2024-07-15 20:57:37.048041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 
nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.048055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.834 [2024-07-15 20:57:37.048107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.048121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.834 #20 NEW cov: 12143 ft: 14210 corp: 9/249b lim: 45 exec/s: 0 rss: 71Mb L: 33/41 MS: 1 ChangeBit- 00:07:09.834 [2024-07-15 20:57:37.098073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.098098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.834 [2024-07-15 20:57:37.098151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.098165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.834 [2024-07-15 20:57:37.098216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.834 [2024-07-15 20:57:37.098230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.834 #21 NEW cov: 12143 ft: 14233 corp: 10/282b lim: 45 exec/s: 0 rss: 71Mb L: 33/41 MS: 1 ChangeBinInt- 00:07:10.093 [2024-07-15 20:57:37.138234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.138260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.138328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00a20000 cdw11:59870007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.138343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.138394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00002b00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.138408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.093 #22 NEW cov: 12143 ft: 14276 corp: 11/315b lim: 45 exec/s: 0 rss: 71Mb L: 33/41 MS: 1 PersAutoDict- DE: "\242Y\207\355\313D+\000"- 00:07:10.093 [2024-07-15 20:57:37.188336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.188361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.188417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.188431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.188483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.188498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.093 #23 NEW cov: 12143 ft: 14287 corp: 12/348b lim: 45 exec/s: 0 rss: 71Mb L: 33/41 MS: 1 ChangeBinInt- 00:07:10.093 [2024-07-15 20:57:37.228471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.228496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.228548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.228562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.228613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00200000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.228627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.093 #24 NEW cov: 12143 ft: 14309 corp: 13/381b lim: 45 exec/s: 0 rss: 71Mb L: 33/41 MS: 1 ChangeBit- 00:07:10.093 [2024-07-15 20:57:37.268393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.268418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.268473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.268488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.093 #25 NEW cov: 12143 ft: 14343 corp: 14/401b lim: 45 exec/s: 0 rss: 71Mb L: 20/41 MS: 1 InsertByte- 00:07:10.093 [2024-07-15 20:57:37.308655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00210000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.308680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.308734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.308747] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.308798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00200000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.308812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.093 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:10.093 #26 NEW cov: 12166 ft: 14410 corp: 15/434b lim: 45 exec/s: 0 rss: 71Mb L: 33/41 MS: 1 ChangeBinInt- 00:07:10.093 [2024-07-15 20:57:37.358820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.358848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.358901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.358914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.093 [2024-07-15 20:57:37.358964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00090000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.093 [2024-07-15 20:57:37.358978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.093 #27 NEW cov: 12166 ft: 14451 corp: 16/467b lim: 45 exec/s: 0 rss: 71Mb L: 33/41 MS: 1 ChangeBinInt- 00:07:10.353 [2024-07-15 20:57:37.398625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87eda259 cdw11:cb440001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.398651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.353 #29 NEW cov: 12166 ft: 15189 corp: 17/480b lim: 45 exec/s: 0 rss: 71Mb L: 13/41 MS: 2 CrossOver-PersAutoDict- DE: "\242Y\207\355\313D+\000"- 00:07:10.353 [2024-07-15 20:57:37.439055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.439080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.439131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.439145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.439195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.439209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.353 #30 NEW cov: 12166 ft: 15217 corp: 18/513b lim: 45 exec/s: 30 rss: 71Mb L: 33/41 MS: 1 CopyPart- 00:07:10.353 [2024-07-15 20:57:37.489224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.489249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.489300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.489313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.489364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.489378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.353 #31 NEW cov: 12166 ft: 15233 corp: 19/546b lim: 45 exec/s: 31 rss: 71Mb L: 33/41 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:10.353 [2024-07-15 20:57:37.529358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.529383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.529439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:edcb5987 cdw11:442b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.529457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.529509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.529523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.353 #32 NEW cov: 12166 ft: 15264 corp: 20/579b lim: 45 exec/s: 32 rss: 71Mb L: 33/41 MS: 1 PersAutoDict- DE: "\242Y\207\355\313D+\000"- 00:07:10.353 [2024-07-15 20:57:37.579494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.579524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.579574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.579587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.579639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.579653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.353 #33 NEW cov: 12166 ft: 15273 corp: 21/612b lim: 45 exec/s: 33 rss: 71Mb L: 33/41 MS: 1 ChangeBit- 00:07:10.353 [2024-07-15 20:57:37.619631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.619656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.619710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00a27c00 cdw11:59870007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.619724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.353 [2024-07-15 20:57:37.619774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:7c7c2b00 cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.353 [2024-07-15 20:57:37.619788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.612 #34 NEW cov: 12166 ft: 15287 corp: 22/643b lim: 45 exec/s: 34 rss: 71Mb L: 31/41 MS: 1 PersAutoDict- DE: "\242Y\207\355\313D+\000"- 00:07:10.612 [2024-07-15 20:57:37.669760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.669784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.669837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.669851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.669901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.669914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.612 #35 NEW cov: 12166 ft: 15301 corp: 23/676b lim: 45 exec/s: 35 rss: 71Mb L: 33/41 MS: 1 ChangeBinInt- 00:07:10.612 [2024-07-15 20:57:37.709832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00210000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.709857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.709911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.709925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 
20:57:37.709976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00200000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.709990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.612 #36 NEW cov: 12166 ft: 15340 corp: 24/709b lim: 45 exec/s: 36 rss: 71Mb L: 33/41 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:10.612 [2024-07-15 20:57:37.760002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.760026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.760080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.760093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.760142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.760156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.612 #37 NEW cov: 12166 ft: 15355 corp: 25/742b lim: 45 exec/s: 37 rss: 72Mb L: 33/41 MS: 1 ShuffleBytes- 00:07:10.612 [2024-07-15 20:57:37.810148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.810172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.810226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.810240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.810290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.810304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.612 #43 NEW cov: 12166 ft: 15390 corp: 26/775b lim: 45 exec/s: 43 rss: 72Mb L: 33/41 MS: 1 ChangeByte- 00:07:10.612 [2024-07-15 20:57:37.860283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.860307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.860361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.860377] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.612 [2024-07-15 20:57:37.860428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.612 [2024-07-15 20:57:37.860447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.612 #44 NEW cov: 12166 ft: 15399 corp: 27/808b lim: 45 exec/s: 44 rss: 72Mb L: 33/41 MS: 1 ShuffleBytes- 00:07:10.871 [2024-07-15 20:57:37.910422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:37.910453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:37.910506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:37.910520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:37.910571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00060000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:37.910585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.871 #45 NEW cov: 12166 ft: 15410 corp: 28/841b lim: 45 exec/s: 45 rss: 72Mb L: 33/41 MS: 1 CopyPart- 00:07:10.871 [2024-07-15 20:57:37.960578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:37.960604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:37.960656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:37.960671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:37.960722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:37.960736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.871 #46 NEW cov: 12166 ft: 15430 corp: 29/870b lim: 45 exec/s: 46 rss: 72Mb L: 29/41 MS: 1 EraseBytes- 00:07:10.871 [2024-07-15 20:57:38.000681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7c7c7c7c cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.000707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.000760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 
nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.000774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.000825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:7c7c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.000839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.871 #47 NEW cov: 12166 ft: 15454 corp: 30/900b lim: 45 exec/s: 47 rss: 72Mb L: 30/41 MS: 1 CopyPart- 00:07:10.871 [2024-07-15 20:57:38.040924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00210000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.040953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.041006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.041020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.041070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00200000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.041084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.041137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00008383 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.041151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.871 #48 NEW cov: 12166 ft: 15470 corp: 31/938b lim: 45 exec/s: 48 rss: 72Mb L: 38/41 MS: 1 InsertRepeatedBytes- 00:07:10.871 [2024-07-15 20:57:38.081027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.081052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.081104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.081118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.081170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0d000d0d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.081184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.081233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 
cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.081247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.871 #49 NEW cov: 12166 ft: 15477 corp: 32/974b lim: 45 exec/s: 49 rss: 72Mb L: 36/41 MS: 1 InsertRepeatedBytes- 00:07:10.871 [2024-07-15 20:57:38.120976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.121002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.121054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.121068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.871 [2024-07-15 20:57:38.121119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.121132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.871 #50 NEW cov: 12166 ft: 15512 corp: 33/1007b lim: 45 exec/s: 50 rss: 72Mb L: 33/41 MS: 1 CrossOver- 00:07:10.871 [2024-07-15 20:57:38.160815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.871 [2024-07-15 20:57:38.160842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.130 #51 NEW cov: 12166 ft: 15519 corp: 34/1022b lim: 45 exec/s: 51 rss: 72Mb L: 15/41 MS: 1 EraseBytes- 00:07:11.130 [2024-07-15 20:57:38.211383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.211408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.211462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.211477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.211544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0d000d0d cdw11:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.211558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.211618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.211637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:11.130 #52 NEW cov: 12166 ft: 15534 corp: 35/1058b lim: 45 exec/s: 52 rss: 72Mb L: 36/41 MS: 1 ChangeBit- 00:07:11.130 [2024-07-15 20:57:38.261373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.261398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.261449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:06000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.261460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.261499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.261513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.130 #53 NEW cov: 12166 ft: 15540 corp: 36/1091b lim: 45 exec/s: 53 rss: 72Mb L: 33/41 MS: 1 ShuffleBytes- 00:07:11.130 [2024-07-15 20:57:38.311384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7c7c7c7c cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.311409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.311464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7c7c7c00 cdw11:7c7c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.311478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.130 #54 NEW cov: 12166 ft: 15545 corp: 37/1114b lim: 45 exec/s: 54 rss: 72Mb L: 23/41 MS: 1 CopyPart- 00:07:11.130 [2024-07-15 20:57:38.351504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.351528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.351584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.351598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.130 #55 NEW cov: 12166 ft: 15574 corp: 38/1137b lim: 45 exec/s: 55 rss: 72Mb L: 23/41 MS: 1 EraseBytes- 00:07:11.130 [2024-07-15 20:57:38.401792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.401817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.401870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.401884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.130 [2024-07-15 20:57:38.401934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00ffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.130 [2024-07-15 20:57:38.401947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.389 #56 NEW cov: 12166 ft: 15603 corp: 39/1170b lim: 45 exec/s: 56 rss: 73Mb L: 33/41 MS: 1 ChangeBinInt- 00:07:11.389 [2024-07-15 20:57:38.452108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.389 [2024-07-15 20:57:38.452132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.389 [2024-07-15 20:57:38.452185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.389 [2024-07-15 20:57:38.452199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.389 [2024-07-15 20:57:38.452251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.389 [2024-07-15 20:57:38.452265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.389 [2024-07-15 20:57:38.452317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8c8c008c cdw11:8c8c0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.389 [2024-07-15 20:57:38.452331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.389 #57 NEW cov: 12166 ft: 15618 corp: 40/1210b lim: 45 exec/s: 28 rss: 73Mb L: 40/41 MS: 1 InsertRepeatedBytes- 00:07:11.389 #57 DONE cov: 12166 ft: 15618 corp: 40/1210b lim: 45 exec/s: 28 rss: 73Mb 00:07:11.389 ###### Recommended dictionary. ###### 00:07:11.389 "\242Y\207\355\313D+\000" # Uses: 4 00:07:11.389 "\377\377\377\377" # Uses: 0 00:07:11.389 "\002\000\000\000\000\000\000\000" # Uses: 0 00:07:11.389 ###### End of recommended dictionary. 
###### 00:07:11.389 Done 57 runs in 2 second(s) 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.389 20:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:11.389 [2024-07-15 20:57:38.641022] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:11.389 [2024-07-15 20:57:38.641094] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783235 ] 00:07:11.389 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.668 [2024-07-15 20:57:38.825747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.668 [2024-07-15 20:57:38.897487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.668 [2024-07-15 20:57:38.957344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.925 [2024-07-15 20:57:38.973644] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:11.925 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.925 INFO: Seed: 177493061 00:07:11.925 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:11.925 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:11.925 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:11.925 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.925 #2 INITED exec/s: 0 rss: 64Mb 00:07:11.925 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.925 This may also happen if the target rejected all inputs we tried so far 00:07:11.925 [2024-07-15 20:57:39.042543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:11.925 [2024-07-15 20:57:39.042581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.183 NEW_FUNC[1/695]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:12.183 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.183 #10 NEW cov: 11835 ft: 11831 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 3 ShuffleBytes-ShuffleBytes-CopyPart- 00:07:12.183 [2024-07-15 20:57:39.383457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000707 cdw11:00000000 00:07:12.183 [2024-07-15 20:57:39.383497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.183 #13 NEW cov: 11966 ft: 12545 corp: 3/5b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 3 ChangeBinInt-ChangeByte-CopyPart- 00:07:12.183 [2024-07-15 20:57:39.424006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008686 cdw11:00000000 00:07:12.184 [2024-07-15 20:57:39.424033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.184 [2024-07-15 20:57:39.424154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008686 cdw11:00000000 00:07:12.184 [2024-07-15 20:57:39.424171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.184 [2024-07-15 20:57:39.424277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008686 cdw11:00000000 00:07:12.184 [2024-07-15 20:57:39.424295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.184 [2024-07-15 20:57:39.424408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000707 cdw11:00000000 00:07:12.184 [2024-07-15 20:57:39.424426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.184 #14 NEW cov: 11972 ft: 12968 corp: 4/13b lim: 10 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:12.184 [2024-07-15 20:57:39.473627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000707 cdw11:00000000 00:07:12.184 [2024-07-15 20:57:39.473654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.446 #15 NEW cov: 12057 ft: 13264 corp: 5/15b lim: 10 exec/s: 0 rss: 70Mb L: 2/8 MS: 1 ShuffleBytes- 00:07:12.446 [2024-07-15 20:57:39.514339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.514364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.446 [2024-07-15 20:57:39.514489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.514507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.446 [2024-07-15 20:57:39.514618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.514637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.446 [2024-07-15 20:57:39.514757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.514774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.446 #16 NEW cov: 12057 ft: 13327 corp: 6/23b lim: 10 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:12.446 [2024-07-15 20:57:39.554429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.554460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.446 [2024-07-15 20:57:39.554585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008686 cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.554605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.446 [2024-07-15 20:57:39.554720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008686 cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.554737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:12.446 [2024-07-15 20:57:39.554859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000707 cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.554878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.446 #17 NEW cov: 12057 ft: 13375 corp: 7/31b lim: 10 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CrossOver- 00:07:12.446 [2024-07-15 20:57:39.603952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000007f5 cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.603978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.446 #18 NEW cov: 12057 ft: 13460 corp: 8/33b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ChangeBinInt- 00:07:12.446 [2024-07-15 20:57:39.654056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000072c cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.654083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.446 #19 NEW cov: 12057 ft: 13512 corp: 9/35b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ChangeByte- 00:07:12.446 [2024-07-15 20:57:39.694221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000707 cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.694247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.446 #20 NEW cov: 12057 ft: 13590 corp: 10/37b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ShuffleBytes- 00:07:12.446 [2024-07-15 20:57:39.734361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:12.446 [2024-07-15 20:57:39.734387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.785 #21 NEW cov: 12057 ft: 13651 corp: 11/39b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ShuffleBytes- 00:07:12.785 [2024-07-15 20:57:39.784439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:39.784470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.785 #22 NEW cov: 12057 ft: 13678 corp: 12/41b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 CMP- DE: "\000\000"- 00:07:12.785 [2024-07-15 20:57:39.824536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000707 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:39.824562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.785 #23 NEW cov: 12057 ft: 13713 corp: 13/43b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ShuffleBytes- 00:07:12.785 [2024-07-15 20:57:39.864709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a07 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:39.864734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.785 #24 NEW cov: 12057 ft: 13731 corp: 14/45b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 
CrossOver- 00:07:12.785 [2024-07-15 20:57:39.914783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000307 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:39.914808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.785 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:12.785 #25 NEW cov: 12080 ft: 13773 corp: 15/47b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ChangeBit- 00:07:12.785 [2024-07-15 20:57:39.965000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:39.965025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.785 #26 NEW cov: 12080 ft: 13784 corp: 16/49b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:12.785 [2024-07-15 20:57:40.015801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:40.015827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.785 [2024-07-15 20:57:40.015939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:40.015956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.785 [2024-07-15 20:57:40.016067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:40.016086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.785 [2024-07-15 20:57:40.016204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:07:12.785 [2024-07-15 20:57:40.016222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.785 #27 NEW cov: 12080 ft: 13817 corp: 17/57b lim: 10 exec/s: 27 rss: 71Mb L: 8/8 MS: 1 ChangeByte- 00:07:12.785 [2024-07-15 20:57:40.064929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 00:07:12.785 [2024-07-15 20:57:40.064959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.043 #28 NEW cov: 12080 ft: 13951 corp: 18/59b lim: 10 exec/s: 28 rss: 71Mb L: 2/8 MS: 1 ChangeBit- 00:07:13.043 [2024-07-15 20:57:40.115450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000072e cdw11:00000000 00:07:13.043 [2024-07-15 20:57:40.115477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.043 #29 NEW cov: 12080 ft: 14025 corp: 19/61b lim: 10 exec/s: 29 rss: 71Mb L: 2/8 MS: 1 ChangeByte- 00:07:13.043 [2024-07-15 20:57:40.155481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000300 cdw11:00000000 00:07:13.043 
[2024-07-15 20:57:40.155509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.043 [2024-07-15 20:57:40.155625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 00:07:13.043 [2024-07-15 20:57:40.155645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.043 #30 NEW cov: 12080 ft: 14217 corp: 20/65b lim: 10 exec/s: 30 rss: 72Mb L: 4/8 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:13.043 [2024-07-15 20:57:40.205694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000080 cdw11:00000000 00:07:13.043 [2024-07-15 20:57:40.205721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.043 #31 NEW cov: 12080 ft: 14238 corp: 21/67b lim: 10 exec/s: 31 rss: 72Mb L: 2/8 MS: 1 ChangeBit- 00:07:13.043 [2024-07-15 20:57:40.246383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:00000000 00:07:13.043 [2024-07-15 20:57:40.246410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.043 [2024-07-15 20:57:40.246521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.043 [2024-07-15 20:57:40.246540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.043 [2024-07-15 20:57:40.246657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.043 [2024-07-15 20:57:40.246676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.043 [2024-07-15 20:57:40.246795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.044 [2024-07-15 20:57:40.246813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.044 [2024-07-15 20:57:40.246930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:13.044 [2024-07-15 20:57:40.246948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:13.044 #32 NEW cov: 12080 ft: 14313 corp: 22/77b lim: 10 exec/s: 32 rss: 72Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:13.044 [2024-07-15 20:57:40.295996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.044 [2024-07-15 20:57:40.296025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.044 #33 NEW cov: 12080 ft: 14373 corp: 23/80b lim: 10 exec/s: 33 rss: 72Mb L: 3/10 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:13.302 [2024-07-15 20:57:40.335916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.335944] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.302 [2024-07-15 20:57:40.336058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.336076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.302 #34 NEW cov: 12080 ft: 14399 corp: 24/84b lim: 10 exec/s: 34 rss: 72Mb L: 4/10 MS: 1 ChangeBit- 00:07:13.302 [2024-07-15 20:57:40.386213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.386239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.302 #35 NEW cov: 12080 ft: 14451 corp: 25/86b lim: 10 exec/s: 35 rss: 72Mb L: 2/10 MS: 1 CopyPart- 00:07:13.302 [2024-07-15 20:57:40.436306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000307 cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.436334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.302 #36 NEW cov: 12080 ft: 14464 corp: 26/89b lim: 10 exec/s: 36 rss: 72Mb L: 3/10 MS: 1 InsertByte- 00:07:13.302 [2024-07-15 20:57:40.476862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000003ff cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.476888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.302 [2024-07-15 20:57:40.476994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.477010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.302 [2024-07-15 20:57:40.477118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff07 cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.477137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.302 #37 NEW cov: 12080 ft: 14597 corp: 27/95b lim: 10 exec/s: 37 rss: 72Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:07:13.302 [2024-07-15 20:57:40.516789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000072c cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.516819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.302 [2024-07-15 20:57:40.516930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000707 cdw11:00000000 00:07:13.302 [2024-07-15 20:57:40.516947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.303 #38 NEW cov: 12080 ft: 14647 corp: 28/99b lim: 10 exec/s: 38 rss: 72Mb L: 4/10 MS: 1 CrossOver- 00:07:13.303 [2024-07-15 20:57:40.566963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:13.303 [2024-07-15 20:57:40.566991] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.303 [2024-07-15 20:57:40.567107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.303 [2024-07-15 20:57:40.567123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.303 #39 NEW cov: 12080 ft: 14657 corp: 29/103b lim: 10 exec/s: 39 rss: 72Mb L: 4/10 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:13.561 [2024-07-15 20:57:40.606769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.606797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.561 #40 NEW cov: 12080 ft: 14674 corp: 30/105b lim: 10 exec/s: 40 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:07:13.561 [2024-07-15 20:57:40.657026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000300 cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.657054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.561 #42 NEW cov: 12080 ft: 14675 corp: 31/107b lim: 10 exec/s: 42 rss: 72Mb L: 2/10 MS: 2 EraseBytes-CrossOver- 00:07:13.561 [2024-07-15 20:57:40.697686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000007ff cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.697712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.561 [2024-07-15 20:57:40.697828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.697845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.561 [2024-07-15 20:57:40.697965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.697982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.561 [2024-07-15 20:57:40.698098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.698116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.561 [2024-07-15 20:57:40.698225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005a07 cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.698242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:13.561 #43 NEW cov: 12080 ft: 14682 corp: 32/117b lim: 10 exec/s: 43 rss: 72Mb L: 10/10 MS: 1 CMP- DE: "\377\377\377\377\377\377\377Z"- 00:07:13.561 [2024-07-15 20:57:40.737207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.737235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.561 #44 NEW cov: 12080 ft: 14697 corp: 33/120b lim: 10 exec/s: 44 rss: 73Mb L: 3/10 MS: 1 EraseBytes- 00:07:13.561 [2024-07-15 20:57:40.787734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000003ff cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.787760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.561 [2024-07-15 20:57:40.787863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fdff cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.787881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.561 [2024-07-15 20:57:40.787994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff07 cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.788011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.561 #45 NEW cov: 12080 ft: 14706 corp: 34/126b lim: 10 exec/s: 45 rss: 73Mb L: 6/10 MS: 1 ChangeBit- 00:07:13.561 [2024-07-15 20:57:40.837925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000300 cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.837951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.561 [2024-07-15 20:57:40.838062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.838080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.561 [2024-07-15 20:57:40.838195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 00:07:13.561 [2024-07-15 20:57:40.838212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.821 #46 NEW cov: 12080 ft: 14713 corp: 35/132b lim: 10 exec/s: 46 rss: 73Mb L: 6/10 MS: 1 CopyPart- 00:07:13.821 [2024-07-15 20:57:40.877535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a03 cdw11:00000000 00:07:13.821 [2024-07-15 20:57:40.877562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.821 #47 NEW cov: 12080 ft: 14723 corp: 36/135b lim: 10 exec/s: 47 rss: 73Mb L: 3/10 MS: 1 CrossOver- 00:07:13.821 [2024-07-15 20:57:40.917727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:13.821 [2024-07-15 20:57:40.917752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.821 #48 NEW cov: 12080 ft: 14777 corp: 37/138b lim: 10 exec/s: 48 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:07:13.821 [2024-07-15 20:57:40.958078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a07 cdw11:00000000 00:07:13.821 [2024-07-15 20:57:40.958105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.821 [2024-07-15 20:57:40.958214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ecec cdw11:00000000 00:07:13.821 [2024-07-15 20:57:40.958232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.821 #49 NEW cov: 12080 ft: 14802 corp: 38/143b lim: 10 exec/s: 49 rss: 73Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:07:13.821 [2024-07-15 20:57:40.998189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a03 cdw11:00000000 00:07:13.821 [2024-07-15 20:57:40.998215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.821 [2024-07-15 20:57:40.998335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000600a cdw11:00000000 00:07:13.821 [2024-07-15 20:57:40.998352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.821 [2024-07-15 20:57:41.048113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a60 cdw11:00000000 00:07:13.821 [2024-07-15 20:57:41.048138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.821 #51 NEW cov: 12080 ft: 14817 corp: 39/146b lim: 10 exec/s: 25 rss: 73Mb L: 3/10 MS: 2 InsertByte-EraseBytes- 00:07:13.821 #51 DONE cov: 12080 ft: 14817 corp: 39/146b lim: 10 exec/s: 25 rss: 73Mb 00:07:13.821 ###### Recommended dictionary. ###### 00:07:13.821 "\000\000" # Uses: 5 00:07:13.821 "\377\377\377\377\377\377\377Z" # Uses: 0 00:07:13.821 ###### End of recommended dictionary. 
###### 00:07:13.821 Done 51 runs in 2 second(s) 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:14.080 20:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:14.080 [2024-07-15 20:57:41.237734] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:14.080 [2024-07-15 20:57:41.237818] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783768 ] 00:07:14.080 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.340 [2024-07-15 20:57:41.416349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.340 [2024-07-15 20:57:41.481731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.340 [2024-07-15 20:57:41.540581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.340 [2024-07-15 20:57:41.556846] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:14.340 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.340 INFO: Seed: 2762486145 00:07:14.340 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:14.340 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:14.340 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:14.340 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.340 #2 INITED exec/s: 0 rss: 63Mb 00:07:14.340 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.340 This may also happen if the target rejected all inputs we tried so far 00:07:14.340 [2024-07-15 20:57:41.601996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:14.340 [2024-07-15 20:57:41.602024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.860 NEW_FUNC[1/695]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:14.860 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.860 #5 NEW cov: 11836 ft: 11837 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 3 ChangeBinInt-ChangeBinInt-CrossOver- 00:07:14.860 [2024-07-15 20:57:41.932819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:14.860 [2024-07-15 20:57:41.932849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.860 #6 NEW cov: 11966 ft: 12376 corp: 3/6b lim: 10 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 InsertByte- 00:07:14.860 [2024-07-15 20:57:41.983128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:14.860 [2024-07-15 20:57:41.983155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.860 [2024-07-15 20:57:41.983205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:14.860 [2024-07-15 20:57:41.983219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.860 [2024-07-15 20:57:41.983267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:6 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:14.860 [2024-07-15 20:57:41.983280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.860 #7 NEW cov: 11972 ft: 12851 corp: 4/13b lim: 10 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:07:14.860 [2024-07-15 20:57:42.033056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aca cdw11:00000000 00:07:14.860 [2024-07-15 20:57:42.033081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.860 #8 NEW cov: 12057 ft: 13230 corp: 5/16b lim: 10 exec/s: 0 rss: 70Mb L: 3/7 MS: 1 InsertByte- 00:07:14.860 [2024-07-15 20:57:42.073352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:14.860 [2024-07-15 20:57:42.073377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.860 [2024-07-15 20:57:42.073428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:14.860 [2024-07-15 20:57:42.073447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.860 [2024-07-15 20:57:42.073497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:14.860 [2024-07-15 20:57:42.073511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.860 #9 NEW cov: 12057 ft: 13301 corp: 6/23b lim: 10 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeByte- 00:07:14.860 [2024-07-15 20:57:42.123543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:14.860 [2024-07-15 20:57:42.123567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.860 [2024-07-15 20:57:42.123619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:14.860 [2024-07-15 20:57:42.123633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.860 [2024-07-15 20:57:42.123681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:14.860 [2024-07-15 20:57:42.123694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.120 #10 NEW cov: 12057 ft: 13446 corp: 7/30b lim: 10 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeByte- 00:07:15.120 [2024-07-15 20:57:42.173562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.173588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.173639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.173653] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.120 #11 NEW cov: 12057 ft: 13652 corp: 8/34b lim: 10 exec/s: 0 rss: 70Mb L: 4/7 MS: 1 CopyPart- 00:07:15.120 [2024-07-15 20:57:42.213804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.213830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.213880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.213893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.213941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.213954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.120 #12 NEW cov: 12057 ft: 13735 corp: 9/41b lim: 10 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 CrossOver- 00:07:15.120 [2024-07-15 20:57:42.264029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.264053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.264102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.264115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.264162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.264176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.264222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ab0a cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.264234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.120 #13 NEW cov: 12057 ft: 13982 corp: 10/49b lim: 10 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 InsertByte- 00:07:15.120 [2024-07-15 20:57:42.314181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.314205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.314255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.314268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.314318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.314331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.314377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000001ee cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.314391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.120 #14 NEW cov: 12057 ft: 14021 corp: 11/58b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CopyPart- 00:07:15.120 [2024-07-15 20:57:42.354274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.354298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.354348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004b5f cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.354361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.354410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.120 [2024-07-15 20:57:42.354423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.120 [2024-07-15 20:57:42.354476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000001ee cdw11:00000000 00:07:15.121 [2024-07-15 20:57:42.354490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.121 #15 NEW cov: 12057 ft: 14047 corp: 12/67b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 ChangeByte- 00:07:15.121 [2024-07-15 20:57:42.404107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a81 cdw11:00000000 00:07:15.121 [2024-07-15 20:57:42.404132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.380 #16 NEW cov: 12057 ft: 14073 corp: 13/69b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ChangeBit- 00:07:15.380 [2024-07-15 20:57:42.444517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.380 [2024-07-15 20:57:42.444541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.380 [2024-07-15 20:57:42.444591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004b5f cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.444605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.444652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f2d cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.444666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 
20:57:42.444721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000001ee cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.444734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.381 #17 NEW cov: 12057 ft: 14090 corp: 14/78b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:15.381 [2024-07-15 20:57:42.494664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.494690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.494741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.494755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.494803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f01 cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.494817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.494865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002bee cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.494879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.381 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:15.381 #18 NEW cov: 12080 ft: 14125 corp: 15/87b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:15.381 [2024-07-15 20:57:42.534438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.534466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.381 #19 NEW cov: 12080 ft: 14185 corp: 16/90b lim: 10 exec/s: 0 rss: 71Mb L: 3/9 MS: 1 CrossOver- 00:07:15.381 [2024-07-15 20:57:42.574995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.575020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.575070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.575083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.575132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.575146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.575196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:7 nsid:0 cdw10:000001ee cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.575213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.575262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002301 cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.575276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.381 #20 NEW cov: 12080 ft: 14254 corp: 17/100b lim: 10 exec/s: 20 rss: 71Mb L: 10/10 MS: 1 InsertByte- 00:07:15.381 [2024-07-15 20:57:42.614696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.614725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.381 #21 NEW cov: 12080 ft: 14278 corp: 18/103b lim: 10 exec/s: 21 rss: 71Mb L: 3/10 MS: 1 CrossOver- 00:07:15.381 [2024-07-15 20:57:42.654981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.655006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.655054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000010a cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.655068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.381 [2024-07-15 20:57:42.655117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:15.381 [2024-07-15 20:57:42.655130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.641 #22 NEW cov: 12080 ft: 14290 corp: 19/109b lim: 10 exec/s: 22 rss: 71Mb L: 6/10 MS: 1 CopyPart- 00:07:15.641 [2024-07-15 20:57:42.705039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002f0a cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.705063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.705113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000010a cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.705127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.641 #23 NEW cov: 12080 ft: 14324 corp: 20/114b lim: 10 exec/s: 23 rss: 71Mb L: 5/10 MS: 1 InsertByte- 00:07:15.641 [2024-07-15 20:57:42.755542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.755566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.755614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.755628] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.755677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000015f cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.755690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.755740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002b01 cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.755752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.755817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.755830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.641 #24 NEW cov: 12080 ft: 14333 corp: 21/124b lim: 10 exec/s: 24 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:07:15.641 [2024-07-15 20:57:42.795305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.795329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.795378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ab0a cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.795394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.641 #25 NEW cov: 12080 ft: 14348 corp: 22/128b lim: 10 exec/s: 25 rss: 72Mb L: 4/10 MS: 1 EraseBytes- 00:07:15.641 [2024-07-15 20:57:42.845474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000990a cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.845498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.845551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ca01 cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.845565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.641 #26 NEW cov: 12080 ft: 14358 corp: 23/132b lim: 10 exec/s: 26 rss: 72Mb L: 4/10 MS: 1 InsertByte- 00:07:15.641 [2024-07-15 20:57:42.895928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.895953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.896004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004b5f cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.896018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.896066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.896079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.896129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000101 cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.896142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.641 [2024-07-15 20:57:42.896189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.641 [2024-07-15 20:57:42.896203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.641 #27 NEW cov: 12080 ft: 14371 corp: 24/142b lim: 10 exec/s: 27 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:15.901 [2024-07-15 20:57:42.935804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:15.901 [2024-07-15 20:57:42.935830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:42.935881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.901 [2024-07-15 20:57:42.935895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:42.935946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:42.935960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.901 #28 NEW cov: 12080 ft: 14408 corp: 25/149b lim: 10 exec/s: 28 rss: 72Mb L: 7/10 MS: 1 CrossOver- 00:07:15.901 [2024-07-15 20:57:42.975915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000012fe cdw11:00000000 00:07:15.901 [2024-07-15 20:57:42.975939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:42.975988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:42.976005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:42.976053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.901 [2024-07-15 20:57:42.976067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.901 #29 NEW cov: 12080 ft: 14440 corp: 26/156b lim: 10 exec/s: 29 rss: 72Mb L: 7/10 MS: 1 ChangeBinInt- 00:07:15.901 [2024-07-15 20:57:43.016252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.016276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:15.901 [2024-07-15 20:57:43.016342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.016355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.016404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000015f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.016418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.016471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.016485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.016534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000015f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.016548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.901 #30 NEW cov: 12080 ft: 14456 corp: 27/166b lim: 10 exec/s: 30 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:15.901 [2024-07-15 20:57:43.066217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000125f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.066241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.066291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.066304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.066354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fe2b cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.066367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.901 #31 NEW cov: 12080 ft: 14467 corp: 28/173b lim: 10 exec/s: 31 rss: 72Mb L: 7/10 MS: 1 ShuffleBytes- 00:07:15.901 [2024-07-15 20:57:43.116326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.116349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.116399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000db5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.116412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.116465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.116494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:15.901 #32 NEW cov: 12080 ft: 14476 corp: 29/180b lim: 10 exec/s: 32 rss: 72Mb L: 7/10 MS: 1 ChangeByte- 00:07:15.901 [2024-07-15 20:57:43.156678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.156702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.156750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.156764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.156811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.156824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.156872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.156886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.901 [2024-07-15 20:57:43.156934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002301 cdw11:00000000 00:07:15.901 [2024-07-15 20:57:43.156948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.901 #33 NEW cov: 12080 ft: 14485 corp: 30/190b lim: 10 exec/s: 33 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:16.161 [2024-07-15 20:57:43.206821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.206847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.206899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.206913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.206963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000174 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.206977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.207025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002b01 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.207039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.207086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.207101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:07:16.161 #34 NEW cov: 12080 ft: 14495 corp: 31/200b lim: 10 exec/s: 34 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:07:16.161 [2024-07-15 20:57:43.246903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.246928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.246978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.246993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.247044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002b5f cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.247061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.247109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005f23 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.247123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.247172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00005f01 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.247186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.161 #35 NEW cov: 12080 ft: 14502 corp: 32/210b lim: 10 exec/s: 35 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:16.161 [2024-07-15 20:57:43.297091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.297116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.297168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004b5f cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.297182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.297230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005f2b cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.297244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.297294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000101 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.297307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.297357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00007a01 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.297370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:07:16.161 #41 NEW cov: 12080 ft: 14559 corp: 33/220b lim: 10 exec/s: 41 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:07:16.161 [2024-07-15 20:57:43.347205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:16.161 [2024-07-15 20:57:43.347230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.161 [2024-07-15 20:57:43.347282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f5f cdw11:00000000 00:07:16.162 [2024-07-15 20:57:43.347306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.162 [2024-07-15 20:57:43.347355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000015f cdw11:00000000 00:07:16.162 [2024-07-15 20:57:43.347368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.162 [2024-07-15 20:57:43.347415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002b01 cdw11:00000000 00:07:16.162 [2024-07-15 20:57:43.347429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.162 [2024-07-15 20:57:43.347497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a81 cdw11:00000000 00:07:16.162 [2024-07-15 20:57:43.347511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.162 #42 NEW cov: 12080 ft: 14564 corp: 34/230b lim: 10 exec/s: 42 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:07:16.162 [2024-07-15 20:57:43.386857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:16.162 [2024-07-15 20:57:43.386881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.162 #43 NEW cov: 12080 ft: 14567 corp: 35/232b lim: 10 exec/s: 43 rss: 72Mb L: 2/10 MS: 1 EraseBytes- 00:07:16.162 [2024-07-15 20:57:43.427201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:16.162 [2024-07-15 20:57:43.427225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.162 [2024-07-15 20:57:43.427274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:16.162 [2024-07-15 20:57:43.427288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.162 [2024-07-15 20:57:43.427337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:16.162 [2024-07-15 20:57:43.427350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.421 #44 NEW cov: 12080 ft: 14570 corp: 36/238b lim: 10 exec/s: 44 rss: 73Mb L: 6/10 MS: 1 ShuffleBytes- 00:07:16.421 [2024-07-15 20:57:43.477113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a05 cdw11:00000000 00:07:16.421 [2024-07-15 20:57:43.477137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.421 #45 NEW cov: 12080 ft: 14582 corp: 37/241b lim: 10 exec/s: 45 rss: 73Mb L: 3/10 MS: 1 ChangeBit- 00:07:16.421 [2024-07-15 20:57:43.527608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003e3e cdw11:00000000 00:07:16.421 [2024-07-15 20:57:43.527633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.421 [2024-07-15 20:57:43.527684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003e3e cdw11:00000000 00:07:16.421 [2024-07-15 20:57:43.527698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.421 [2024-07-15 20:57:43.527747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003e3e cdw11:00000000 00:07:16.421 [2024-07-15 20:57:43.527760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.421 [2024-07-15 20:57:43.527808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003e0a cdw11:00000000 00:07:16.421 [2024-07-15 20:57:43.527820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.421 #46 NEW cov: 12080 ft: 14599 corp: 38/250b lim: 10 exec/s: 46 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:16.421 [2024-07-15 20:57:43.577397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee01 cdw11:00000000 00:07:16.421 [2024-07-15 20:57:43.577422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.421 #47 NEW cov: 12080 ft: 14608 corp: 39/252b lim: 10 exec/s: 23 rss: 73Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:16.421 #47 DONE cov: 12080 ft: 14608 corp: 39/252b lim: 10 exec/s: 23 rss: 73Mb 00:07:16.421 Done 47 runs in 2 second(s) 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.679 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.680 20:57:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:16.680 [2024-07-15 20:57:43.764868] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:16.680 [2024-07-15 20:57:43.764954] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784299 ] 00:07:16.680 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.680 [2024-07-15 20:57:43.942540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.938 [2024-07-15 20:57:44.008521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.938 [2024-07-15 20:57:44.067875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.939 [2024-07-15 20:57:44.084174] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:16.939 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:16.939 INFO: Seed: 995536907 00:07:16.939 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:16.939 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:16.939 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:16.939 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.939 [2024-07-15 20:57:44.128848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.939 [2024-07-15 20:57:44.128881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.939 #2 INITED cov: 11864 ft: 11856 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:16.939 [2024-07-15 20:57:44.179000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.939 [2024-07-15 20:57:44.179030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.939 [2024-07-15 20:57:44.179069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.939 [2024-07-15 20:57:44.179085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.939 [2024-07-15 20:57:44.179114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.939 [2024-07-15 20:57:44.179130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.939 [2024-07-15 20:57:44.179159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.939 [2024-07-15 20:57:44.179174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.198 #3 NEW cov: 11994 ft: 13365 corp: 2/5b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:17.198 [2024-07-15 20:57:44.259240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.259271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.259305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.259323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.259352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.259367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.259396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.259411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.198 #4 NEW cov: 12000 ft: 13601 corp: 3/9b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:17.198 [2024-07-15 20:57:44.339394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.339423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.339461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.339477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.339505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.339519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.339547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.339561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.198 #5 NEW cov: 12085 ft: 13788 corp: 4/13b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:17.198 [2024-07-15 20:57:44.419522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.419552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.419584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.419601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.198 #6 NEW cov: 12085 ft: 14045 corp: 5/15b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:17.198 [2024-07-15 20:57:44.479833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.479862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.479893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.479909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.479936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.479950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.198 [2024-07-15 20:57:44.479977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.198 [2024-07-15 20:57:44.479992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.458 #7 NEW cov: 12085 ft: 14108 corp: 6/19b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:17.458 [2024-07-15 20:57:44.539793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.539822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.458 [2024-07-15 20:57:44.539854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.539885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.458 #8 NEW cov: 12085 ft: 14189 corp: 7/21b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeByte- 00:07:17.458 [2024-07-15 20:57:44.620143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.620173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.458 [2024-07-15 20:57:44.620206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.620222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.458 [2024-07-15 20:57:44.620251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.620270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.458 [2024-07-15 20:57:44.620299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.620314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.458 #9 NEW cov: 12085 ft: 14213 corp: 8/25b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 
MS: 1 InsertRepeatedBytes- 00:07:17.458 [2024-07-15 20:57:44.670272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.670300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.458 [2024-07-15 20:57:44.670332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.670347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.458 [2024-07-15 20:57:44.670374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.670388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.458 [2024-07-15 20:57:44.670415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.670429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.458 [2024-07-15 20:57:44.670462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.670493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.458 #10 NEW cov: 12085 ft: 14312 corp: 9/30b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertByte- 00:07:17.458 [2024-07-15 20:57:44.720362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.458 [2024-07-15 20:57:44.720391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.459 [2024-07-15 20:57:44.720423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.459 [2024-07-15 20:57:44.720438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.459 [2024-07-15 20:57:44.720488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.459 [2024-07-15 20:57:44.720504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.459 [2024-07-15 20:57:44.720532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.459 [2024-07-15 20:57:44.720547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.719 #11 
NEW cov: 12085 ft: 14345 corp: 10/34b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ChangeBinInt- 00:07:17.719 [2024-07-15 20:57:44.800603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.800637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.800673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.800689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.800719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.800734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.800762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.800778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.719 #12 NEW cov: 12085 ft: 14394 corp: 11/38b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ShuffleBytes- 00:07:17.719 [2024-07-15 20:57:44.880819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.880850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.880883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.880898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.880926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.880941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.880968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.880984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.719 #13 NEW cov: 12085 ft: 14430 corp: 12/42b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ChangeBit- 00:07:17.719 [2024-07-15 20:57:44.960987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:17.719 [2024-07-15 20:57:44.961016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.961048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.961064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.961091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.961106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.719 [2024-07-15 20:57:44.961134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.719 [2024-07-15 20:57:44.961152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.719 #14 NEW cov: 12085 ft: 14455 corp: 13/46b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:17.979 [2024-07-15 20:57:45.011307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.979 [2024-07-15 20:57:45.011338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.979 [2024-07-15 20:57:45.011372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.979 [2024-07-15 20:57:45.011389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.979 [2024-07-15 20:57:45.011418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.979 [2024-07-15 20:57:45.011434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.979 [2024-07-15 20:57:45.011470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.979 [2024-07-15 20:57:45.011485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.979 [2024-07-15 20:57:45.011515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.979 [2024-07-15 20:57:45.011530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.238 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:18.238 #15 NEW cov: 12108 ft: 14520 corp: 14/51b lim: 5 exec/s: 15 rss: 71Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:18.238 
[2024-07-15 20:57:45.352122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.352158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.352191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.352207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.352234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.352249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.352276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.352290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.238 #16 NEW cov: 12108 ft: 14545 corp: 15/55b lim: 5 exec/s: 16 rss: 71Mb L: 4/5 MS: 1 ChangeByte- 00:07:18.238 [2024-07-15 20:57:45.412243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.412274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.412312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.412328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.412357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.412372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.412400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.412416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.412450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.412466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.238 #17 NEW cov: 12108 ft: 14574 corp: 16/60b 
lim: 5 exec/s: 17 rss: 71Mb L: 5/5 MS: 1 ChangeByte- 00:07:18.238 [2024-07-15 20:57:45.462272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.462302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.462334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.462349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.462377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.462391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.238 [2024-07-15 20:57:45.462418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.238 [2024-07-15 20:57:45.462432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.238 #18 NEW cov: 12108 ft: 14594 corp: 17/64b lim: 5 exec/s: 18 rss: 71Mb L: 4/5 MS: 1 ChangeByte- 00:07:18.498 [2024-07-15 20:57:45.542574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.542604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.542636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.542651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.542679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.542693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.542725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.542740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.542767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.542781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 
m:0 dnr:0 00:07:18.498 #19 NEW cov: 12108 ft: 14641 corp: 18/69b lim: 5 exec/s: 19 rss: 71Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:18.498 [2024-07-15 20:57:45.592546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.592575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.592609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.592624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.498 #20 NEW cov: 12108 ft: 14652 corp: 19/71b lim: 5 exec/s: 20 rss: 72Mb L: 2/5 MS: 1 EraseBytes- 00:07:18.498 [2024-07-15 20:57:45.672776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.672804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.672836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.672851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.498 #21 NEW cov: 12108 ft: 14761 corp: 20/73b lim: 5 exec/s: 21 rss: 72Mb L: 2/5 MS: 1 EraseBytes- 00:07:18.498 [2024-07-15 20:57:45.733086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.733115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.733149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.733164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.733193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.733208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.733236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.733251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.498 [2024-07-15 20:57:45.733280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:18.498 [2024-07-15 20:57:45.733295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.498 #22 NEW cov: 12108 ft: 14782 corp: 21/78b lim: 5 exec/s: 22 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:07:18.758 [2024-07-15 20:57:45.793265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.793295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.793328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.793345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.793373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.793388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.793417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.793432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.793468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.793499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.758 #23 NEW cov: 12108 ft: 14821 corp: 22/83b lim: 5 exec/s: 23 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:18.758 [2024-07-15 20:57:45.873367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.873396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.873427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.873449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.873493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.873508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.873536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.873551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.758 #24 NEW cov: 12108 ft: 14832 corp: 23/87b lim: 5 exec/s: 24 rss: 72Mb L: 4/5 MS: 1 ChangeByte- 00:07:18.758 [2024-07-15 20:57:45.923456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.923485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.923518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.923538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.923567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.923582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.758 #25 NEW cov: 12108 ft: 15000 corp: 24/90b lim: 5 exec/s: 25 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:07:18.758 [2024-07-15 20:57:45.983725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.983755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.983788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.983804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.983833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.983848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.758 [2024-07-15 20:57:45.983877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:45.983891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.758 #26 NEW cov: 12109 ft: 15019 corp: 25/94b lim: 5 exec/s: 26 rss: 72Mb L: 4/5 MS: 1 ShuffleBytes- 00:07:18.758 [2024-07-15 20:57:46.033674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.758 [2024-07-15 20:57:46.033704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.018 #27 NEW cov: 12109 ft: 15031 corp: 26/95b lim: 5 exec/s: 27 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:07:19.018 [2024-07-15 20:57:46.094041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.018 [2024-07-15 20:57:46.094070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.018 [2024-07-15 20:57:46.094102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.018 [2024-07-15 20:57:46.094118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.018 [2024-07-15 20:57:46.094145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.018 [2024-07-15 20:57:46.094159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.018 [2024-07-15 20:57:46.094186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.018 [2024-07-15 20:57:46.094201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.018 [2024-07-15 20:57:46.094228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.018 [2024-07-15 20:57:46.094246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:19.018 #28 NEW cov: 12109 ft: 15083 corp: 27/100b lim: 5 exec/s: 14 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:19.018 #28 DONE cov: 12109 ft: 15083 corp: 27/100b lim: 5 exec/s: 14 rss: 72Mb 00:07:19.018 Done 28 runs in 2 second(s) 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@34 -- # printf %02d 9 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:19.018 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:19.019 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.019 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.019 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.019 20:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:19.278 [2024-07-15 20:57:46.326149] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:19.278 [2024-07-15 20:57:46.326220] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784596 ] 00:07:19.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.278 [2024-07-15 20:57:46.522391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.536 [2024-07-15 20:57:46.589266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.536 [2024-07-15 20:57:46.648564] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.536 [2024-07-15 20:57:46.664872] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:19.536 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:19.537 INFO: Seed: 3576546898 00:07:19.537 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:19.537 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:19.537 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:19.537 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.537 [2024-07-15 20:57:46.730860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.537 [2024-07-15 20:57:46.730901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.537 #2 INITED cov: 11864 ft: 11863 corp: 1/1b exec/s: 0 rss: 69Mb 00:07:19.537 [2024-07-15 20:57:46.771093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.537 [2024-07-15 20:57:46.771122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.537 [2024-07-15 20:57:46.771245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.537 [2024-07-15 20:57:46.771265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.537 #3 NEW cov: 11994 ft: 13133 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:07:19.795 [2024-07-15 20:57:46.831537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.795 [2024-07-15 20:57:46.831567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.831681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.831701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.831812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.831832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.796 #4 NEW cov: 12000 ft: 13558 corp: 3/6b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:07:19.796 [2024-07-15 20:57:46.881669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.881695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.881831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 
[2024-07-15 20:57:46.881848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.881970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.881987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.796 #5 NEW cov: 12085 ft: 13833 corp: 4/9b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CMP- DE: "\377\007"- 00:07:19.796 [2024-07-15 20:57:46.922035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.922062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.922179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.922197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.922319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.922337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.796 #6 NEW cov: 12085 ft: 14008 corp: 5/12b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 PersAutoDict- DE: "\377\007"- 00:07:19.796 [2024-07-15 20:57:46.972198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.972225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.972345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.972362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.972479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.972499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:46.972606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:46.972624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.796 #7 NEW cov: 12085 ft: 14360 corp: 6/16b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertByte- 00:07:19.796 [2024-07-15 
20:57:47.022308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:47.022335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:47.022458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:47.022476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:47.022610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:47.022628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:47.022749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:47.022766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.796 #8 NEW cov: 12085 ft: 14410 corp: 7/20b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:07:19.796 [2024-07-15 20:57:47.072006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:47.072032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.796 [2024-07-15 20:57:47.072159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.796 [2024-07-15 20:57:47.072176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.055 #9 NEW cov: 12085 ft: 14484 corp: 8/22b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 EraseBytes- 00:07:20.055 [2024-07-15 20:57:47.122385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.122413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.055 [2024-07-15 20:57:47.122540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.122559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.055 [2024-07-15 20:57:47.122686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.122704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.055 #10 NEW cov: 12085 ft: 14508 corp: 9/25b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 ChangeByte- 00:07:20.055 [2024-07-15 20:57:47.161994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.162022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.055 #11 NEW cov: 12085 ft: 14577 corp: 10/26b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:20.055 [2024-07-15 20:57:47.202382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.202411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.055 [2024-07-15 20:57:47.202532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.202552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.055 #12 NEW cov: 12085 ft: 14629 corp: 11/28b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 EraseBytes- 00:07:20.055 [2024-07-15 20:57:47.243020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.243047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.055 [2024-07-15 20:57:47.243162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.243179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.055 [2024-07-15 20:57:47.243300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.055 [2024-07-15 20:57:47.243320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.056 [2024-07-15 20:57:47.243440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.056 [2024-07-15 20:57:47.243462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.056 #13 NEW cov: 12085 ft: 14655 corp: 12/32b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertByte- 00:07:20.056 [2024-07-15 20:57:47.282866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.056 [2024-07-15 20:57:47.282892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.056 [2024-07-15 20:57:47.283014] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.056 [2024-07-15 20:57:47.283032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.056 [2024-07-15 20:57:47.283148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.056 [2024-07-15 20:57:47.283166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.056 #14 NEW cov: 12085 ft: 14685 corp: 13/35b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 PersAutoDict- DE: "\377\007"- 00:07:20.056 [2024-07-15 20:57:47.322963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.056 [2024-07-15 20:57:47.322992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.056 [2024-07-15 20:57:47.323117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.056 [2024-07-15 20:57:47.323135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.056 [2024-07-15 20:57:47.323253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.056 [2024-07-15 20:57:47.323271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.056 #15 NEW cov: 12085 ft: 14726 corp: 14/38b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 ChangeBit- 00:07:20.315 [2024-07-15 20:57:47.363344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.363372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.315 [2024-07-15 20:57:47.363494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.363512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.315 [2024-07-15 20:57:47.363628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.363644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.315 [2024-07-15 20:57:47.363765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.363784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:20.315 #16 NEW cov: 12085 ft: 14750 corp: 15/42b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeBit- 00:07:20.315 [2024-07-15 20:57:47.402669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.402695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.315 #17 NEW cov: 12085 ft: 14805 corp: 16/43b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeByte- 00:07:20.315 [2024-07-15 20:57:47.453110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.453137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.315 [2024-07-15 20:57:47.453266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.453284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.315 #18 NEW cov: 12085 ft: 14840 corp: 17/45b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 EraseBytes- 00:07:20.315 [2024-07-15 20:57:47.503226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.503251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.315 [2024-07-15 20:57:47.503374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.503393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.315 #19 NEW cov: 12085 ft: 14912 corp: 18/47b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeByte- 00:07:20.315 [2024-07-15 20:57:47.543104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.543130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.315 #20 NEW cov: 12085 ft: 14922 corp: 19/48b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 EraseBytes- 00:07:20.315 [2024-07-15 20:57:47.593745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.593773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.315 [2024-07-15 20:57:47.593895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.593914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.315 [2024-07-15 20:57:47.594036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.315 [2024-07-15 20:57:47.594055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.832 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:20.832 #21 NEW cov: 12108 ft: 14976 corp: 20/51b lim: 5 exec/s: 21 rss: 71Mb L: 3/4 MS: 1 CrossOver- 00:07:20.832 [2024-07-15 20:57:47.914866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.832 [2024-07-15 20:57:47.914902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.832 [2024-07-15 20:57:47.915032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.832 [2024-07-15 20:57:47.915056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.832 [2024-07-15 20:57:47.915177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.832 [2024-07-15 20:57:47.915195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.832 #22 NEW cov: 12108 ft: 14999 corp: 21/54b lim: 5 exec/s: 22 rss: 71Mb L: 3/4 MS: 1 PersAutoDict- DE: "\377\007"- 00:07:20.832 [2024-07-15 20:57:47.984885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.832 [2024-07-15 20:57:47.984916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.832 [2024-07-15 20:57:47.985045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.832 [2024-07-15 20:57:47.985064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.832 [2024-07-15 20:57:47.985185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.832 [2024-07-15 20:57:47.985203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.833 #23 NEW cov: 12108 ft: 15052 corp: 22/57b lim: 5 exec/s: 23 rss: 71Mb L: 3/4 MS: 1 InsertByte- 00:07:20.833 [2024-07-15 20:57:48.035113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.833 [2024-07-15 20:57:48.035143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.833 [2024-07-15 
20:57:48.035278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.833 [2024-07-15 20:57:48.035297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.833 [2024-07-15 20:57:48.035425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.833 [2024-07-15 20:57:48.035447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.833 #24 NEW cov: 12108 ft: 15053 corp: 23/60b lim: 5 exec/s: 24 rss: 71Mb L: 3/4 MS: 1 InsertByte- 00:07:20.833 [2024-07-15 20:57:48.085263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.833 [2024-07-15 20:57:48.085291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.833 [2024-07-15 20:57:48.085424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.833 [2024-07-15 20:57:48.085446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.833 [2024-07-15 20:57:48.085590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.833 [2024-07-15 20:57:48.085609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.833 #25 NEW cov: 12108 ft: 15089 corp: 24/63b lim: 5 exec/s: 25 rss: 71Mb L: 3/4 MS: 1 ShuffleBytes- 00:07:21.092 [2024-07-15 20:57:48.135436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.135473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.135611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.135629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.135748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.135767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.092 #26 NEW cov: 12108 ft: 15169 corp: 25/66b lim: 5 exec/s: 26 rss: 71Mb L: 3/4 MS: 1 PersAutoDict- DE: "\377\007"- 00:07:21.092 [2024-07-15 20:57:48.195880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:21.092 [2024-07-15 20:57:48.195909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.196033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.196052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.196187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.196204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.196327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.196347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.092 #27 NEW cov: 12108 ft: 15181 corp: 26/70b lim: 5 exec/s: 27 rss: 72Mb L: 4/4 MS: 1 ChangeByte- 00:07:21.092 [2024-07-15 20:57:48.265810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.265840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.265959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.265979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.266099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.266118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.092 #28 NEW cov: 12108 ft: 15207 corp: 27/73b lim: 5 exec/s: 28 rss: 72Mb L: 3/4 MS: 1 EraseBytes- 00:07:21.092 [2024-07-15 20:57:48.315398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.315427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.092 #29 NEW cov: 12108 ft: 15267 corp: 28/74b lim: 5 exec/s: 29 rss: 72Mb L: 1/4 MS: 1 EraseBytes- 00:07:21.092 [2024-07-15 20:57:48.376377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.376405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 
20:57:48.376526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.376545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.376662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.376681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.092 [2024-07-15 20:57:48.376802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.092 [2024-07-15 20:57:48.376819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.352 #30 NEW cov: 12108 ft: 15270 corp: 29/78b lim: 5 exec/s: 30 rss: 72Mb L: 4/4 MS: 1 InsertByte- 00:07:21.352 [2024-07-15 20:57:48.426316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.426344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.352 [2024-07-15 20:57:48.426467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.426486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.352 [2024-07-15 20:57:48.426618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.426636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.352 #31 NEW cov: 12108 ft: 15284 corp: 30/81b lim: 5 exec/s: 31 rss: 72Mb L: 3/4 MS: 1 CopyPart- 00:07:21.352 [2024-07-15 20:57:48.486402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.486429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.352 [2024-07-15 20:57:48.486549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.486568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.352 [2024-07-15 20:57:48.486688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.486706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.352 #32 NEW cov: 12108 ft: 15307 corp: 31/84b lim: 5 exec/s: 32 rss: 72Mb L: 3/4 MS: 1 CopyPart- 00:07:21.352 [2024-07-15 20:57:48.547148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.547179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.352 [2024-07-15 20:57:48.547307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.547326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.352 [2024-07-15 20:57:48.547440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.547461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.352 [2024-07-15 20:57:48.547584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.547603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.352 [2024-07-15 20:57:48.547720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.547738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.352 #33 NEW cov: 12108 ft: 15377 corp: 32/89b lim: 5 exec/s: 33 rss: 72Mb L: 5/5 MS: 1 PersAutoDict- DE: "\377\007"- 00:07:21.352 [2024-07-15 20:57:48.606174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.352 [2024-07-15 20:57:48.606202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.352 #34 NEW cov: 12108 ft: 15392 corp: 33/90b lim: 5 exec/s: 34 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:07:21.612 [2024-07-15 20:57:48.656995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.612 [2024-07-15 20:57:48.657023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.612 [2024-07-15 20:57:48.657144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.612 [2024-07-15 20:57:48.657162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.612 [2024-07-15 20:57:48.657285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.612 [2024-07-15 20:57:48.657302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.612 #35 NEW cov: 12108 ft: 15395 corp: 34/93b lim: 5 exec/s: 35 rss: 72Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:21.612 [2024-07-15 20:57:48.706819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.612 [2024-07-15 20:57:48.706846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.612 [2024-07-15 20:57:48.706974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.612 [2024-07-15 20:57:48.706992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.612 #36 NEW cov: 12108 ft: 15409 corp: 35/95b lim: 5 exec/s: 18 rss: 72Mb L: 2/5 MS: 1 EraseBytes- 00:07:21.612 #36 DONE cov: 12108 ft: 15409 corp: 35/95b lim: 5 exec/s: 18 rss: 72Mb 00:07:21.612 ###### Recommended dictionary. ###### 00:07:21.612 "\377\007" # Uses: 5 00:07:21.612 ###### End of recommended dictionary. ###### 00:07:21.612 Done 36 runs in 2 second(s) 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo 
leak:nvmf_ctrlr_create 00:07:21.612 20:57:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:21.871 [2024-07-15 20:57:48.907499] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:21.871 [2024-07-15 20:57:48.907568] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785123 ] 00:07:21.871 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.871 [2024-07-15 20:57:49.085954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.871 [2024-07-15 20:57:49.150626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.130 [2024-07-15 20:57:49.209520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.130 [2024-07-15 20:57:49.225781] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:22.130 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.130 INFO: Seed: 1840563520 00:07:22.130 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:22.130 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:22.130 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:22.130 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.130 #2 INITED exec/s: 0 rss: 63Mb 00:07:22.130 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:22.130 This may also happen if the target rejected all inputs we tried so far 00:07:22.130 [2024-07-15 20:57:49.274332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0aa0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.130 [2024-07-15 20:57:49.274370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.131 [2024-07-15 20:57:49.274404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.131 [2024-07-15 20:57:49.274419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.390 NEW_FUNC[1/696]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:22.390 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.390 #3 NEW cov: 11887 ft: 11881 corp: 2/22b lim: 40 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:22.390 [2024-07-15 20:57:49.615275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.390 [2024-07-15 20:57:49.615316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.390 [2024-07-15 20:57:49.615350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.390 [2024-07-15 20:57:49.615366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.390 [2024-07-15 20:57:49.615411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.390 [2024-07-15 20:57:49.615426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.390 [2024-07-15 20:57:49.615463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.390 [2024-07-15 20:57:49.615478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.390 #5 NEW cov: 12017 ft: 12978 corp: 3/54b lim: 40 exec/s: 0 rss: 70Mb L: 32/32 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:22.390 [2024-07-15 20:57:49.675335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:5959595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.390 [2024-07-15 20:57:49.675367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.390 [2024-07-15 20:57:49.675403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.390 [2024-07-15 20:57:49.675419] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.390 [2024-07-15 20:57:49.675459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.390 [2024-07-15 20:57:49.675475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.390 [2024-07-15 20:57:49.675505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.390 [2024-07-15 20:57:49.675520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.649 #16 NEW cov: 12023 ft: 13204 corp: 4/86b lim: 40 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:22.649 [2024-07-15 20:57:49.755519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:5959595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.649 [2024-07-15 20:57:49.755549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.649 [2024-07-15 20:57:49.755584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.649 [2024-07-15 20:57:49.755600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.649 [2024-07-15 20:57:49.755630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.649 [2024-07-15 20:57:49.755645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.649 [2024-07-15 20:57:49.755675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.649 [2024-07-15 20:57:49.755691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.649 #17 NEW cov: 12108 ft: 13403 corp: 5/118b lim: 40 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:22.649 [2024-07-15 20:57:49.835641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0aa0a0a0 cdw11:a050a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.649 [2024-07-15 20:57:49.835672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.649 [2024-07-15 20:57:49.835707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.649 [2024-07-15 20:57:49.835723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.649 #18 NEW cov: 12108 ft: 13489 corp: 6/139b lim: 40 exec/s: 0 rss: 70Mb L: 21/32 MS: 1 ChangeByte- 00:07:22.649 [2024-07-15 20:57:49.915966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59593259 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.650 [2024-07-15 20:57:49.915995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.650 [2024-07-15 20:57:49.916028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.650 [2024-07-15 20:57:49.916043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.650 [2024-07-15 20:57:49.916072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.650 [2024-07-15 20:57:49.916086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.650 [2024-07-15 20:57:49.916113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.650 [2024-07-15 20:57:49.916127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.909 #19 NEW cov: 12108 ft: 13575 corp: 7/171b lim: 40 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:07:22.909 [2024-07-15 20:57:49.966053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:5959595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:49.966082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:49.966119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:49.966135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:49.966163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595961 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:49.966178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:49.966206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:49.966220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.909 #30 NEW cov: 12108 ft: 13649 corp: 8/203b lim: 40 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:22.909 [2024-07-15 20:57:50.016115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1effffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.016145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 
20:57:50.016179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.016196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.909 #34 NEW cov: 12108 ft: 13732 corp: 9/221b lim: 40 exec/s: 0 rss: 71Mb L: 18/32 MS: 4 ChangeBit-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:22.909 [2024-07-15 20:57:50.066299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59525959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.066329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:50.066362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.066377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:50.066406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.066420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:50.066454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.066485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.909 #35 NEW cov: 12108 ft: 13759 corp: 10/253b lim: 40 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:22.909 [2024-07-15 20:57:50.116425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:5959595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.116460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:50.116510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595956 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.116532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:50.116563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:56565656 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.116578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.909 [2024-07-15 20:57:50.116608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.909 [2024-07-15 20:57:50.116623] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.910 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:22.910 #36 NEW cov: 12125 ft: 13804 corp: 11/290b lim: 40 exec/s: 0 rss: 71Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:22.910 [2024-07-15 20:57:50.196724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59593259 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.910 [2024-07-15 20:57:50.196754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.910 [2024-07-15 20:57:50.196789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.910 [2024-07-15 20:57:50.196806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.910 [2024-07-15 20:57:50.196836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.910 [2024-07-15 20:57:50.196852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.910 [2024-07-15 20:57:50.196882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.910 [2024-07-15 20:57:50.196898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.169 #37 NEW cov: 12125 ft: 13821 corp: 12/322b lim: 40 exec/s: 37 rss: 71Mb L: 32/37 MS: 1 ShuffleBytes- 00:07:23.169 [2024-07-15 20:57:50.276902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59593259 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.276931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.276964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.276979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.277007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:5959a6a6 cdw11:a6a6a6a6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.277022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.277050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:a6a65959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.277064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.169 #38 NEW cov: 12125 ft: 13880 corp: 13/354b lim: 40 exec/s: 38 rss: 71Mb L: 32/37 MS: 1 
ChangeBinInt- 00:07:23.169 [2024-07-15 20:57:50.357128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:5959595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.357158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.357203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.357219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.357247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.357263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.357291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.357307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.169 #39 NEW cov: 12125 ft: 13905 corp: 14/386b lim: 40 exec/s: 39 rss: 71Mb L: 32/37 MS: 1 ShuffleBytes- 00:07:23.169 [2024-07-15 20:57:50.407280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.407311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.407345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:5959595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.407361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.407391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.407406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.407436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595961 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.407458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.169 [2024-07-15 20:57:50.407488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.169 [2024-07-15 20:57:50.407503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.428 #40 NEW cov: 12125 ft: 13996 
corp: 15/426b lim: 40 exec/s: 40 rss: 71Mb L: 40/40 MS: 1 CrossOver- 00:07:23.428 [2024-07-15 20:57:50.487353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1eff01ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.487383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.428 [2024-07-15 20:57:50.487418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.487455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.428 #41 NEW cov: 12125 ft: 14026 corp: 16/444b lim: 40 exec/s: 41 rss: 71Mb L: 18/40 MS: 1 ChangeBinInt- 00:07:23.428 [2024-07-15 20:57:50.567601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1effffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.567632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.428 [2024-07-15 20:57:50.567667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.567683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.428 #42 NEW cov: 12125 ft: 14044 corp: 17/463b lim: 40 exec/s: 42 rss: 71Mb L: 19/40 MS: 1 InsertByte- 00:07:23.428 [2024-07-15 20:57:50.617783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:5959595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.617814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.428 [2024-07-15 20:57:50.617850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.617867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.428 [2024-07-15 20:57:50.617899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.617915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.428 [2024-07-15 20:57:50.617956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:595d5959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.617988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.428 #43 NEW cov: 12125 ft: 14056 corp: 18/496b lim: 40 exec/s: 43 rss: 71Mb L: 33/40 MS: 1 InsertByte- 00:07:23.428 [2024-07-15 20:57:50.667895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 
nsid:0 cdw10:595959e1 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.667923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.428 [2024-07-15 20:57:50.667955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5c595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.667986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.428 [2024-07-15 20:57:50.668016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.668031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.428 [2024-07-15 20:57:50.668060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.428 [2024-07-15 20:57:50.668075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.688 #44 NEW cov: 12125 ft: 14075 corp: 19/529b lim: 40 exec/s: 44 rss: 71Mb L: 33/40 MS: 1 InsertByte- 00:07:23.688 [2024-07-15 20:57:50.748157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:5959595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.748187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.688 [2024-07-15 20:57:50.748236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.748252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.688 [2024-07-15 20:57:50.748281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59592659 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.748297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.688 [2024-07-15 20:57:50.748326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.748341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.688 #45 NEW cov: 12125 ft: 14125 corp: 20/561b lim: 40 exec/s: 45 rss: 71Mb L: 32/40 MS: 1 ChangeByte- 00:07:23.688 [2024-07-15 20:57:50.798229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a989898 cdw11:98989898 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.798258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.688 [2024-07-15 20:57:50.798291] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:98989898 cdw11:98989898 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.798305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.688 [2024-07-15 20:57:50.798334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:98989898 cdw11:98989898 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.798348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.688 [2024-07-15 20:57:50.798376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:98989898 cdw11:98989898 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.798390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.688 #46 NEW cov: 12125 ft: 14139 corp: 21/596b lim: 40 exec/s: 46 rss: 71Mb L: 35/40 MS: 1 InsertRepeatedBytes- 00:07:23.688 [2024-07-15 20:57:50.848489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0aa0a0a0 cdw11:a050a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.848518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.688 [2024-07-15 20:57:50.848552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.688 [2024-07-15 20:57:50.848568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.689 [2024-07-15 20:57:50.848598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:a0a0a052 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.848614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.689 [2024-07-15 20:57:50.848643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.848658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.689 [2024-07-15 20:57:50.848692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:59595959 cdw11:5959a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.848707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.689 #47 NEW cov: 12125 ft: 14159 corp: 22/636b lim: 40 exec/s: 47 rss: 72Mb L: 40/40 MS: 1 CrossOver- 00:07:23.689 [2024-07-15 20:57:50.928656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:2159595c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.928687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:23.689 [2024-07-15 20:57:50.928720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.928736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.689 [2024-07-15 20:57:50.928766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595961 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.928781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.689 [2024-07-15 20:57:50.928810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959590a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.928825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.689 #48 NEW cov: 12125 ft: 14230 corp: 23/668b lim: 40 exec/s: 48 rss: 72Mb L: 32/40 MS: 1 ChangeByte- 00:07:23.689 [2024-07-15 20:57:50.978738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59593259 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.978768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.689 [2024-07-15 20:57:50.978802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.978818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.689 [2024-07-15 20:57:50.978847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.978863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.689 [2024-07-15 20:57:50.978893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:59595959 cdw11:5959540a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.689 [2024-07-15 20:57:50.978908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.948 #49 NEW cov: 12125 ft: 14239 corp: 24/700b lim: 40 exec/s: 49 rss: 72Mb L: 32/40 MS: 1 ChangeBinInt- 00:07:23.948 [2024-07-15 20:57:51.038736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1eff01ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.948 [2024-07-15 20:57:51.038766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.948 [2024-07-15 20:57:51.038798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff7fffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.948 [2024-07-15 20:57:51.038833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.948 #50 NEW cov: 12125 ft: 14275 corp: 25/718b lim: 40 exec/s: 50 rss: 72Mb L: 18/40 MS: 1 ChangeBit- 00:07:23.948 [2024-07-15 20:57:51.118992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1eff01ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.948 [2024-07-15 20:57:51.119020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.948 [2024-07-15 20:57:51.119053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:7e7fffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.948 [2024-07-15 20:57:51.119068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.948 #51 NEW cov: 12132 ft: 14308 corp: 26/736b lim: 40 exec/s: 51 rss: 72Mb L: 18/40 MS: 1 ChangeByte- 00:07:23.948 [2024-07-15 20:57:51.199347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59593259 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.948 [2024-07-15 20:57:51.199378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.948 [2024-07-15 20:57:51.199412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.948 [2024-07-15 20:57:51.199428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.948 [2024-07-15 20:57:51.199466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:5959a6a6 cdw11:a6a6a6a6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.948 [2024-07-15 20:57:51.199482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.948 [2024-07-15 20:57:51.199512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:a6a65959 cdw11:5959580a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.948 [2024-07-15 20:57:51.199527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.208 #52 NEW cov: 12132 ft: 14309 corp: 27/768b lim: 40 exec/s: 26 rss: 72Mb L: 32/40 MS: 1 ChangeBinInt- 00:07:24.208 #52 DONE cov: 12132 ft: 14309 corp: 27/768b lim: 40 exec/s: 26 rss: 72Mb 00:07:24.208 Done 52 runs in 2 second(s) 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.208 20:57:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:24.208 [2024-07-15 20:57:51.434364] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:24.208 [2024-07-15 20:57:51.434432] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785635 ] 00:07:24.208 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.467 [2024-07-15 20:57:51.610145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.467 [2024-07-15 20:57:51.676207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.467 [2024-07-15 20:57:51.735393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.467 [2024-07-15 20:57:51.751702] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:24.726 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.726 INFO: Seed: 71593356 00:07:24.726 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:24.726 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:24.726 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:24.726 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.726 #2 INITED exec/s: 0 rss: 63Mb 00:07:24.726 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:24.726 This may also happen if the target rejected all inputs we tried so far 00:07:24.726 [2024-07-15 20:57:51.811151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.726 [2024-07-15 20:57:51.811180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.726 [2024-07-15 20:57:51.811235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.726 [2024-07-15 20:57:51.811248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.726 [2024-07-15 20:57:51.811299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.726 [2024-07-15 20:57:51.811313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.986 NEW_FUNC[1/697]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:24.986 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.986 #8 NEW cov: 11899 ft: 11900 corp: 2/31b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:24.986 [2024-07-15 20:57:52.142163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.986 [2024-07-15 20:57:52.142205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.986 [2024-07-15 20:57:52.142291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.986 [2024-07-15 20:57:52.142313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.986 [2024-07-15 20:57:52.142385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:8989890a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.986 [2024-07-15 20:57:52.142404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.986 #9 NEW cov: 12029 ft: 12551 corp: 3/57b lim: 40 exec/s: 0 rss: 70Mb L: 26/30 MS: 1 CrossOver- 00:07:24.986 [2024-07-15 20:57:52.201964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.986 [2024-07-15 20:57:52.201988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.986 [2024-07-15 20:57:52.202047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.986 [2024-07-15 20:57:52.202061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.986 #10 NEW cov: 12035 ft: 13092 corp: 4/79b lim: 40 exec/s: 0 rss: 70Mb L: 22/30 MS: 1 EraseBytes- 00:07:24.986 [2024-07-15 20:57:52.242267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.986 [2024-07-15 20:57:52.242291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.986 [2024-07-15 20:57:52.242349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:898989a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.986 [2024-07-15 20:57:52.242363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.986 [2024-07-15 20:57:52.242418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:8989890a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.986 [2024-07-15 20:57:52.242432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.986 #11 NEW cov: 12120 ft: 13329 corp: 5/105b lim: 40 exec/s: 0 rss: 70Mb L: 26/30 MS: 1 ChangeBit- 00:07:25.245 [2024-07-15 20:57:52.292220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.292244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.292302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.292315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.245 #15 NEW cov: 12120 ft: 13406 corp: 6/125b lim: 40 exec/s: 0 rss: 70Mb L: 20/30 MS: 4 ChangeByte-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:25.245 [2024-07-15 20:57:52.332842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a1a1a152 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.332866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.332924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.332941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.332996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.333010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.333065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:52525252 
cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.333078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.333132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:52525252 cdw11:52a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.333146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.245 #17 NEW cov: 12120 ft: 13769 corp: 7/165b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:25.245 [2024-07-15 20:57:52.382477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.382502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.382559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.382572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.245 #18 NEW cov: 12120 ft: 13819 corp: 8/187b lim: 40 exec/s: 0 rss: 71Mb L: 22/40 MS: 1 InsertRepeatedBytes- 00:07:25.245 [2024-07-15 20:57:52.422579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.422603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.422662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.422676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.245 #19 NEW cov: 12120 ft: 13833 corp: 9/210b lim: 40 exec/s: 0 rss: 71Mb L: 23/40 MS: 1 CopyPart- 00:07:25.245 [2024-07-15 20:57:52.472853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.472878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.472933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:09898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.472948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.245 [2024-07-15 20:57:52.473003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.245 [2024-07-15 20:57:52.473017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.246 
#20 NEW cov: 12120 ft: 13854 corp: 10/240b lim: 40 exec/s: 0 rss: 71Mb L: 30/40 MS: 1 ChangeBit- 00:07:25.246 [2024-07-15 20:57:52.512835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.246 [2024-07-15 20:57:52.512863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.246 [2024-07-15 20:57:52.512920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.246 [2024-07-15 20:57:52.512934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.505 #21 NEW cov: 12120 ft: 13916 corp: 11/263b lim: 40 exec/s: 0 rss: 71Mb L: 23/40 MS: 1 ShuffleBytes- 00:07:25.505 [2024-07-15 20:57:52.563116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.563141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.505 [2024-07-15 20:57:52.563198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:09898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.563212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.505 [2024-07-15 20:57:52.563267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.563280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.505 #22 NEW cov: 12120 ft: 13942 corp: 12/291b lim: 40 exec/s: 0 rss: 71Mb L: 28/40 MS: 1 EraseBytes- 00:07:25.505 [2024-07-15 20:57:52.613112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff40ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.613137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.505 [2024-07-15 20:57:52.613195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.613209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.505 #23 NEW cov: 12120 ft: 13962 corp: 13/314b lim: 40 exec/s: 0 rss: 71Mb L: 23/40 MS: 1 ChangeByte- 00:07:25.505 [2024-07-15 20:57:52.663729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a1a1a152 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.663754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.505 [2024-07-15 20:57:52.663810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:52525252 
cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.663824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.505 [2024-07-15 20:57:52.663878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.663892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.505 [2024-07-15 20:57:52.663945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.663959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.505 [2024-07-15 20:57:52.664014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:52525252 cdw11:52a1a152 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.505 [2024-07-15 20:57:52.664031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.505 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:25.505 #24 NEW cov: 12143 ft: 14011 corp: 14/354b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:07:25.506 [2024-07-15 20:57:52.713408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.713432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.506 [2024-07-15 20:57:52.713516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.713530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.506 #25 NEW cov: 12143 ft: 14022 corp: 15/377b lim: 40 exec/s: 0 rss: 71Mb L: 23/40 MS: 1 ShuffleBytes- 00:07:25.506 [2024-07-15 20:57:52.753938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a1a1a152 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.753962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.506 [2024-07-15 20:57:52.754022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.754038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.506 [2024-07-15 20:57:52.754094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.754108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.506 
[2024-07-15 20:57:52.754165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.754179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.506 [2024-07-15 20:57:52.754235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:52525252 cdw11:52a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.754249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.506 #31 NEW cov: 12143 ft: 14037 corp: 16/417b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:07:25.506 [2024-07-15 20:57:52.793828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:98989898 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.793854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.506 [2024-07-15 20:57:52.793913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:989898ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.793927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.506 [2024-07-15 20:57:52.793982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.506 [2024-07-15 20:57:52.793996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.766 #32 NEW cov: 12143 ft: 14077 corp: 17/447b lim: 40 exec/s: 32 rss: 71Mb L: 30/40 MS: 1 InsertRepeatedBytes- 00:07:25.766 [2024-07-15 20:57:52.833746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.833771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:52.833829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.833843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.766 #33 NEW cov: 12143 ft: 14087 corp: 18/470b lim: 40 exec/s: 33 rss: 71Mb L: 23/40 MS: 1 ChangeBinInt- 00:07:25.766 [2024-07-15 20:57:52.883896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.883921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:52.883980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.883994] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.766 #34 NEW cov: 12143 ft: 14103 corp: 19/490b lim: 40 exec/s: 34 rss: 71Mb L: 20/40 MS: 1 ShuffleBytes- 00:07:25.766 [2024-07-15 20:57:52.924164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.924189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:52.924249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.924263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:52.924321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:8989890a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.924334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.766 #35 NEW cov: 12143 ft: 14168 corp: 20/516b lim: 40 exec/s: 35 rss: 71Mb L: 26/40 MS: 1 ChangeByte- 00:07:25.766 [2024-07-15 20:57:52.964268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.964292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:52.964366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.964380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:52.964438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:52.964455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.766 #36 NEW cov: 12143 ft: 14210 corp: 21/543b lim: 40 exec/s: 36 rss: 71Mb L: 27/40 MS: 1 InsertRepeatedBytes- 00:07:25.766 [2024-07-15 20:57:53.004383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:53.004411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:53.004471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:53.004485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:53.004557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:53.004572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.766 #37 NEW cov: 12143 ft: 14252 corp: 22/573b lim: 40 exec/s: 37 rss: 71Mb L: 30/40 MS: 1 CopyPart- 00:07:25.766 [2024-07-15 20:57:53.044355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:53.044379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.766 [2024-07-15 20:57:53.044436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.766 [2024-07-15 20:57:53.044454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.027 #38 NEW cov: 12143 ft: 14271 corp: 23/594b lim: 40 exec/s: 38 rss: 71Mb L: 21/40 MS: 1 InsertByte- 00:07:26.027 [2024-07-15 20:57:53.084275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ba1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.084299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.027 #42 NEW cov: 12143 ft: 15053 corp: 24/603b lim: 40 exec/s: 42 rss: 71Mb L: 9/40 MS: 4 ShuffleBytes-ChangeBit-ShuffleBytes-CrossOver- 00:07:26.027 [2024-07-15 20:57:53.125024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.125048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.125104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.125118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.125175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:42424242 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.125188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.125242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:42424242 cdw11:42424242 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.125256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.125314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:42ffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.125328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.027 #43 NEW cov: 12143 ft: 15097 corp: 25/643b lim: 40 exec/s: 43 rss: 71Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:26.027 [2024-07-15 20:57:53.174886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.174910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.174969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.174984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.175039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.175053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.027 #44 NEW cov: 12143 ft: 15143 corp: 26/673b lim: 40 exec/s: 44 rss: 72Mb L: 30/40 MS: 1 ChangeByte- 00:07:26.027 [2024-07-15 20:57:53.225001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:98989898 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.225025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.225101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:989898ff cdw11:ffffff13 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.225115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.225173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:db9360d5 cdw11:442b00ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.225187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.027 #45 NEW cov: 12143 ft: 15187 corp: 27/703b lim: 40 exec/s: 45 rss: 72Mb L: 30/40 MS: 1 CMP- DE: "\023\333\223`\325D+\000"- 00:07:26.027 [2024-07-15 20:57:53.275292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.275315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.275374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.275387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.275449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.275463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.275538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.275552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.027 #46 NEW cov: 12143 ft: 15203 corp: 28/737b lim: 40 exec/s: 46 rss: 72Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:07:26.027 [2024-07-15 20:57:53.315278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.315305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.315364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.315378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.027 [2024-07-15 20:57:53.315433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:22ffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.027 [2024-07-15 20:57:53.315452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.286 #47 NEW cov: 12143 ft: 15239 corp: 29/761b lim: 40 exec/s: 47 rss: 72Mb L: 24/40 MS: 1 InsertByte- 00:07:26.286 [2024-07-15 20:57:53.355058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a13db93 cdw11:60d5442b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.286 [2024-07-15 20:57:53.355081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.286 #50 NEW cov: 12143 ft: 15253 corp: 30/770b lim: 40 exec/s: 50 rss: 72Mb L: 9/40 MS: 3 ShuffleBytes-CopyPart-PersAutoDict- DE: "\023\333\223`\325D+\000"- 00:07:26.286 [2024-07-15 20:57:53.395350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.286 [2024-07-15 20:57:53.395374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.286 [2024-07-15 20:57:53.395432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.286 [2024-07-15 20:57:53.395452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.286 #51 NEW cov: 12143 ft: 15270 corp: 31/790b lim: 40 exec/s: 51 rss: 72Mb L: 20/40 MS: 1 ChangeBit- 00:07:26.286 [2024-07-15 20:57:53.445637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff40ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.286 [2024-07-15 20:57:53.445662] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.286 [2024-07-15 20:57:53.445719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.286 [2024-07-15 20:57:53.445734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.286 [2024-07-15 20:57:53.445788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.286 [2024-07-15 20:57:53.445802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.286 #52 NEW cov: 12143 ft: 15284 corp: 32/816b lim: 40 exec/s: 52 rss: 72Mb L: 26/40 MS: 1 CopyPart- 00:07:26.286 [2024-07-15 20:57:53.495791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.286 [2024-07-15 20:57:53.495814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.286 [2024-07-15 20:57:53.495889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:86898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.286 [2024-07-15 20:57:53.495903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.287 [2024-07-15 20:57:53.495960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:8989890a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.287 [2024-07-15 20:57:53.495976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.287 #53 NEW cov: 12143 ft: 15308 corp: 33/842b lim: 40 exec/s: 53 rss: 72Mb L: 26/40 MS: 1 ChangeBinInt- 00:07:26.287 [2024-07-15 20:57:53.535874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.287 [2024-07-15 20:57:53.535898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.287 [2024-07-15 20:57:53.535961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:13db9360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.287 [2024-07-15 20:57:53.535974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.287 [2024-07-15 20:57:53.536033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d5442b00 cdw11:8989890a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.287 [2024-07-15 20:57:53.536047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.287 #54 NEW cov: 12143 ft: 15330 corp: 34/868b lim: 40 exec/s: 54 rss: 72Mb L: 26/40 MS: 1 PersAutoDict- DE: "\023\333\223`\325D+\000"- 00:07:26.545 [2024-07-15 20:57:53.586338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.586363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.586423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.586437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.586513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.586526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.586584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:42424242 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.586598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.586656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:42424242 cdw11:09894242 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.586670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.545 #55 NEW cov: 12143 ft: 15338 corp: 35/908b lim: 40 exec/s: 55 rss: 72Mb L: 40/40 MS: 1 CrossOver- 00:07:26.545 [2024-07-15 20:57:53.626155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:98989898 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.626179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.626237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:989898ff cdw11:ffffff13 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.626251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.626311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:db93602c cdw11:442b00ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.626325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.545 #56 NEW cov: 12143 ft: 15349 corp: 36/938b lim: 40 exec/s: 56 rss: 72Mb L: 30/40 MS: 1 ChangeByte- 00:07:26.545 [2024-07-15 20:57:53.676287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:98989898 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.676312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.676370] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:98ffffff cdw11:ff13db93 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.676384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.676438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:602c442b cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.676457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.545 #57 NEW cov: 12143 ft: 15356 corp: 37/968b lim: 40 exec/s: 57 rss: 72Mb L: 30/40 MS: 1 CopyPart- 00:07:26.545 [2024-07-15 20:57:53.726724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a1a1a152 cdw11:52135252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.726747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.726806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.726820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.726875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.726888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.726943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:52525252 cdw11:52525252 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.545 [2024-07-15 20:57:53.726957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.545 [2024-07-15 20:57:53.727013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:52525252 cdw11:52a1a152 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.546 [2024-07-15 20:57:53.727027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.546 #58 NEW cov: 12143 ft: 15365 corp: 38/1008b lim: 40 exec/s: 58 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:07:26.546 [2024-07-15 20:57:53.776235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d13db93 cdw11:60d5442b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.546 [2024-07-15 20:57:53.776258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.546 #62 NEW cov: 12143 ft: 15390 corp: 39/1018b lim: 40 exec/s: 31 rss: 72Mb L: 10/40 MS: 4 ChangeByte-ShuffleBytes-InsertByte-PersAutoDict- DE: "\023\333\223`\325D+\000"- 00:07:26.546 #62 DONE cov: 12143 ft: 15390 corp: 39/1018b lim: 40 exec/s: 31 rss: 72Mb 00:07:26.546 ###### Recommended dictionary. ###### 00:07:26.546 "\023\333\223`\325D+\000" # Uses: 3 00:07:26.546 ###### End of recommended dictionary. 
######
00:07:26.546 Done 62 runs in 2 second(s)
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412'
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:26.804 20:57:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12
[2024-07-15 20:57:53.961300] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization...
00:07:26.804 [2024-07-15 20:57:53.961368] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785943 ] 00:07:26.804 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.062 [2024-07-15 20:57:54.143753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.062 [2024-07-15 20:57:54.210038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.062 [2024-07-15 20:57:54.269185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.062 [2024-07-15 20:57:54.285503] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:27.062 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.062 INFO: Seed: 2606611669 00:07:27.062 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:27.062 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:27.063 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:27.063 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.063 #2 INITED exec/s: 0 rss: 64Mb 00:07:27.063 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.063 This may also happen if the target rejected all inputs we tried so far 00:07:27.063 [2024-07-15 20:57:54.352693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.063 [2024-07-15 20:57:54.352728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.063 [2024-07-15 20:57:54.352873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.063 [2024-07-15 20:57:54.352893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.063 [2024-07-15 20:57:54.353017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.063 [2024-07-15 20:57:54.353038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.063 [2024-07-15 20:57:54.353171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.063 [2024-07-15 20:57:54.353189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.581 NEW_FUNC[1/697]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:27.581 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.581 #8 NEW cov: 11893 ft: 11898 corp: 2/40b lim: 40 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:27.581 [2024-07-15 20:57:54.692893] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.692930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.693048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.693067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.693179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.693199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.581 #9 NEW cov: 12027 ft: 12739 corp: 3/68b lim: 40 exec/s: 0 rss: 70Mb L: 28/39 MS: 1 EraseBytes- 00:07:27.581 [2024-07-15 20:57:54.743276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.743306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.743419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.743438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.743572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.743592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.743709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.743725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.581 #10 NEW cov: 12033 ft: 13003 corp: 4/104b lim: 40 exec/s: 0 rss: 70Mb L: 36/39 MS: 1 EraseBytes- 00:07:27.581 [2024-07-15 20:57:54.783369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.783397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.783531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.783549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.783675] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.783693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.783817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.783835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.581 #11 NEW cov: 12118 ft: 13313 corp: 5/143b lim: 40 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 ShuffleBytes- 00:07:27.581 [2024-07-15 20:57:54.823171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.823199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.823319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.823337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.823460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.823478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.581 #12 NEW cov: 12118 ft: 13392 corp: 6/174b lim: 40 exec/s: 0 rss: 70Mb L: 31/39 MS: 1 InsertRepeatedBytes- 00:07:27.581 [2024-07-15 20:57:54.863097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.863125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.581 [2024-07-15 20:57:54.863264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.581 [2024-07-15 20:57:54.863284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.842 #13 NEW cov: 12118 ft: 13681 corp: 7/191b lim: 40 exec/s: 0 rss: 70Mb L: 17/39 MS: 1 EraseBytes- 00:07:27.842 [2024-07-15 20:57:54.913685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:54.913712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:54.913839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:54.913857] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:54.913984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:54.914001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:54.914127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:99000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:54.914146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.842 #14 NEW cov: 12118 ft: 13754 corp: 8/230b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 ChangeByte- 00:07:27.842 [2024-07-15 20:57:54.963873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:54.963901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:54.964028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:54.964045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:54.964175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:54.964195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:54.964324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:54.964341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.842 #15 NEW cov: 12118 ft: 13775 corp: 9/266b lim: 40 exec/s: 0 rss: 71Mb L: 36/39 MS: 1 ShuffleBytes- 00:07:27.842 [2024-07-15 20:57:55.014028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.014054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.014186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.014206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.014325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 
20:57:55.014344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.014466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:99000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.014485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.842 #16 NEW cov: 12118 ft: 13877 corp: 10/305b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 ChangeBit- 00:07:27.842 [2024-07-15 20:57:55.064241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.064268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.064389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.064410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.064539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.064558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.064683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.064702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.842 #17 NEW cov: 12118 ft: 13948 corp: 11/344b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 CopyPart- 00:07:27.842 [2024-07-15 20:57:55.104110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.104138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.104267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.104285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.104404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 [2024-07-15 20:57:55.104423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.842 [2024-07-15 20:57:55.104547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.842 
[2024-07-15 20:57:55.104567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.842 #18 NEW cov: 12118 ft: 13990 corp: 12/383b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 CopyPart- 00:07:28.124 [2024-07-15 20:57:55.144204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.144232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.144367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.144388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.144518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.144538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.144664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.144682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.124 #19 NEW cov: 12118 ft: 14011 corp: 13/422b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 ChangeBit- 00:07:28.124 [2024-07-15 20:57:55.204883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.204910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.205035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.205054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.205179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.205197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.205314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.205333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.205446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:28.124 [2024-07-15 20:57:55.205464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.124 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:28.124 #20 NEW cov: 12141 ft: 14182 corp: 14/462b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:07:28.124 [2024-07-15 20:57:55.244711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000024 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.244739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.244863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.244881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.245007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.245026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.245154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.245174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.124 #21 NEW cov: 12141 ft: 14217 corp: 15/498b lim: 40 exec/s: 0 rss: 71Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:28.124 [2024-07-15 20:57:55.284210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.284235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.284361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.284381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.284509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.284528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.124 #22 NEW cov: 12141 ft: 14253 corp: 16/523b lim: 40 exec/s: 0 rss: 71Mb L: 25/40 MS: 1 EraseBytes- 00:07:28.124 [2024-07-15 20:57:55.335048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.335073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.335198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.335217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.335342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.335362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.335479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.335497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.124 [2024-07-15 20:57:55.335626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.124 [2024-07-15 20:57:55.335646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.124 #23 NEW cov: 12141 ft: 14264 corp: 17/563b lim: 40 exec/s: 23 rss: 71Mb L: 40/40 MS: 1 InsertByte- 00:07:28.124 [2024-07-15 20:57:55.374776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.125 [2024-07-15 20:57:55.374804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.125 [2024-07-15 20:57:55.374926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.125 [2024-07-15 20:57:55.374943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.125 [2024-07-15 20:57:55.375065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.125 [2024-07-15 20:57:55.375085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.384 #24 NEW cov: 12141 ft: 14292 corp: 18/594b lim: 40 exec/s: 24 rss: 71Mb L: 31/40 MS: 1 ShuffleBytes- 00:07:28.384 [2024-07-15 20:57:55.435217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.435245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.435365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.435386] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.435510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.435529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.435643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00006100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.435660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.384 #25 NEW cov: 12141 ft: 14412 corp: 19/630b lim: 40 exec/s: 25 rss: 71Mb L: 36/40 MS: 1 ChangeByte- 00:07:28.384 [2024-07-15 20:57:55.485417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.485451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.485582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.485603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.485736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.485755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.485879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:bababa00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.485898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.384 #26 NEW cov: 12141 ft: 14426 corp: 20/669b lim: 40 exec/s: 26 rss: 71Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:07:28.384 [2024-07-15 20:57:55.525465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000024 cdw11:0000004d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.525493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.525626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.525643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.525769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.525787] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.525908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.525925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.384 #27 NEW cov: 12141 ft: 14438 corp: 21/706b lim: 40 exec/s: 27 rss: 71Mb L: 37/40 MS: 1 InsertByte- 00:07:28.384 [2024-07-15 20:57:55.575644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a210000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.575677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.384 [2024-07-15 20:57:55.575803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.384 [2024-07-15 20:57:55.575824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.575943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.575962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.576080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:99000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.576098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.385 #28 NEW cov: 12141 ft: 14455 corp: 22/745b lim: 40 exec/s: 28 rss: 71Mb L: 39/40 MS: 1 ChangeByte- 00:07:28.385 [2024-07-15 20:57:55.616039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.616065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.616185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.616204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.616335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.616355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.616471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 
20:57:55.616492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.616606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.616623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.385 #29 NEW cov: 12141 ft: 14481 corp: 23/785b lim: 40 exec/s: 29 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:28.385 [2024-07-15 20:57:55.665879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.665905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.666031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:003c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.666050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.666169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.666187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.385 [2024-07-15 20:57:55.666316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.385 [2024-07-15 20:57:55.666335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.645 #30 NEW cov: 12141 ft: 14485 corp: 24/824b lim: 40 exec/s: 30 rss: 72Mb L: 39/40 MS: 1 ChangeByte- 00:07:28.645 [2024-07-15 20:57:55.705978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.706006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.706126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.706145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.706264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.706282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.706400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:28.645 [2024-07-15 20:57:55.706417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.645 #31 NEW cov: 12141 ft: 14503 corp: 25/863b lim: 40 exec/s: 31 rss: 72Mb L: 39/40 MS: 1 ChangeBinInt- 00:07:28.645 [2024-07-15 20:57:55.745878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.745907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.746026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.746042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.746165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:baba31ba cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.746183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.645 #32 NEW cov: 12141 ft: 14513 corp: 26/894b lim: 40 exec/s: 32 rss: 72Mb L: 31/40 MS: 1 ChangeByte- 00:07:28.645 [2024-07-15 20:57:55.796255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a030000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.796283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.796406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:003c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.796425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.796548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.796567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.796691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.796709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.645 #33 NEW cov: 12141 ft: 14549 corp: 27/933b lim: 40 exec/s: 33 rss: 72Mb L: 39/40 MS: 1 ChangeBinInt- 00:07:28.645 [2024-07-15 20:57:55.846406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a210000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.846433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.846579] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.846601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.846728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.846746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.846875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:99000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.846893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.645 #34 NEW cov: 12141 ft: 14585 corp: 28/972b lim: 40 exec/s: 34 rss: 72Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:28.645 [2024-07-15 20:57:55.896866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.896896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.897026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.897045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.897172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.897191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.897316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.897334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.645 [2024-07-15 20:57:55.897458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.645 [2024-07-15 20:57:55.897475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.645 #35 NEW cov: 12141 ft: 14597 corp: 29/1012b lim: 40 exec/s: 35 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:28.905 [2024-07-15 20:57:55.946556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.905 [2024-07-15 20:57:55.946587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.905 [2024-07-15 
20:57:55.946713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.905 [2024-07-15 20:57:55.946733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.905 [2024-07-15 20:57:55.946857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.905 [2024-07-15 20:57:55.946876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.905 #36 NEW cov: 12141 ft: 14625 corp: 30/1043b lim: 40 exec/s: 36 rss: 72Mb L: 31/40 MS: 1 ShuffleBytes- 00:07:28.905 [2024-07-15 20:57:55.987130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.905 [2024-07-15 20:57:55.987158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.905 [2024-07-15 20:57:55.987277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:55.987293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:55.987417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:55.987433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:55.987556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:55.987574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:55.987695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:55.987716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.906 #37 NEW cov: 12141 ft: 14661 corp: 31/1083b lim: 40 exec/s: 37 rss: 72Mb L: 40/40 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\005"- 00:07:28.906 [2024-07-15 20:57:56.027017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.027045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.027175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.027195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.027319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.027338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.027466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000061 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.027485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.906 #38 NEW cov: 12141 ft: 14681 corp: 32/1120b lim: 40 exec/s: 38 rss: 72Mb L: 37/40 MS: 1 CopyPart- 00:07:28.906 [2024-07-15 20:57:56.077142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.077171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.077308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.077326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.077446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.077465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.077587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.077606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.906 #39 NEW cov: 12141 ft: 14742 corp: 33/1159b lim: 40 exec/s: 39 rss: 72Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:28.906 [2024-07-15 20:57:56.127209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.127234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.127355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.127372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.127501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.127520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.127649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.127668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.906 #40 NEW cov: 12141 ft: 14746 corp: 34/1198b lim: 40 exec/s: 40 rss: 72Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:28.906 [2024-07-15 20:57:56.167339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000024 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.167366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.167499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.167517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.167637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fa000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.167656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.906 [2024-07-15 20:57:56.167784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.906 [2024-07-15 20:57:56.167802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.906 #41 NEW cov: 12141 ft: 14763 corp: 35/1235b lim: 40 exec/s: 41 rss: 72Mb L: 37/40 MS: 1 InsertByte- 00:07:29.166 [2024-07-15 20:57:56.207504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000024 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.207532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.207655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.207673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.207794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00240000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.207812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.207944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.207962] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.166 #42 NEW cov: 12141 ft: 14774 corp: 36/1271b lim: 40 exec/s: 42 rss: 72Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:29.166 [2024-07-15 20:57:56.247537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.247563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.247680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.247698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.247821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffff05 cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.247841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.247961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.247978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.166 #43 NEW cov: 12141 ft: 14778 corp: 37/1310b lim: 40 exec/s: 43 rss: 72Mb L: 39/40 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\005"- 00:07:29.166 [2024-07-15 20:57:56.287750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.287777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.287904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.287927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.288050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.288069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.288194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.288213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.166 #44 NEW cov: 12141 ft: 14784 corp: 38/1346b lim: 40 exec/s: 44 rss: 72Mb L: 36/40 MS: 1 ChangeBit- 00:07:29.166 [2024-07-15 20:57:56.327706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND 
(19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.327734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.327875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.327892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.328018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.328035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.166 [2024-07-15 20:57:56.328158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.166 [2024-07-15 20:57:56.328177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.166 #45 NEW cov: 12141 ft: 14788 corp: 39/1385b lim: 40 exec/s: 22 rss: 72Mb L: 39/40 MS: 1 ChangeBit- 00:07:29.166 #45 DONE cov: 12141 ft: 14788 corp: 39/1385b lim: 40 exec/s: 22 rss: 72Mb 00:07:29.166 ###### Recommended dictionary. ###### 00:07:29.166 "\377\377\377\377\377\377\377\005" # Uses: 1 00:07:29.166 ###### End of recommended dictionary. ###### 00:07:29.166 Done 45 runs in 2 second(s) 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 
's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.427 20:57:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:29.427 [2024-07-15 20:57:56.516523] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:29.427 [2024-07-15 20:57:56.516612] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786476 ] 00:07:29.427 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.427 [2024-07-15 20:57:56.696519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.687 [2024-07-15 20:57:56.761898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.687 [2024-07-15 20:57:56.820725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.687 [2024-07-15 20:57:56.836989] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:29.687 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.687 INFO: Seed: 861620265 00:07:29.687 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:29.687 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:29.687 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:29.687 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.687 #2 INITED exec/s: 0 rss: 64Mb 00:07:29.687 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:29.687 This may also happen if the target rejected all inputs we tried so far 00:07:29.687 [2024-07-15 20:57:56.886129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270a27 cdw11:57575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.687 [2024-07-15 20:57:56.886158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.687 [2024-07-15 20:57:56.886234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.687 [2024-07-15 20:57:56.886249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.687 [2024-07-15 20:57:56.886307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.687 [2024-07-15 20:57:56.886320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.946 NEW_FUNC[1/696]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:29.946 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:29.946 #27 NEW cov: 11885 ft: 11886 corp: 2/27b lim: 40 exec/s: 0 rss: 70Mb L: 26/26 MS: 5 CopyPart-InsertByte-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:29.946 [2024-07-15 20:57:57.206761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.946 [2024-07-15 20:57:57.206796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.946 [2024-07-15 20:57:57.206852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.946 [2024-07-15 20:57:57.206866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.946 #32 NEW cov: 12015 ft: 12706 corp: 3/48b lim: 40 exec/s: 0 rss: 70Mb L: 21/26 MS: 5 CrossOver-InsertByte-EraseBytes-CrossOver-InsertRepeatedBytes- 00:07:30.206 [2024-07-15 20:57:57.246883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.246910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.246966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.246979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.247031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.247045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.206 #33 NEW cov: 12021 ft: 12908 corp: 4/78b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:30.206 [2024-07-15 20:57:57.296986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.297027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.297083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.297098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.297152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.297166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.206 #34 NEW cov: 12106 ft: 13117 corp: 5/108b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:30.206 [2024-07-15 20:57:57.347054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.347079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.347135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.347148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.206 #35 NEW cov: 12106 ft: 13290 corp: 6/129b lim: 40 exec/s: 0 rss: 70Mb L: 21/30 MS: 1 ShuffleBytes- 00:07:30.206 [2024-07-15 20:57:57.387325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.387351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.387409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c52bc5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.387424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.387478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.387492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:30.206 #36 NEW cov: 12106 ft: 13412 corp: 7/159b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ChangeByte- 00:07:30.206 [2024-07-15 20:57:57.427432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.427463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.427518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.427533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.427587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.427601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.206 #37 NEW cov: 12106 ft: 13463 corp: 8/189b lim: 40 exec/s: 0 rss: 71Mb L: 30/30 MS: 1 ChangeByte- 00:07:30.206 [2024-07-15 20:57:57.477423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.477454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.206 [2024-07-15 20:57:57.477522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.206 [2024-07-15 20:57:57.477536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.466 #38 NEW cov: 12106 ft: 13558 corp: 9/206b lim: 40 exec/s: 0 rss: 71Mb L: 17/30 MS: 1 EraseBytes- 00:07:30.466 [2024-07-15 20:57:57.517807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.517834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.517891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.517905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.517958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.517972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.518024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffff0aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.518038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.466 #39 NEW cov: 12106 ft: 14069 corp: 10/242b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 CrossOver- 00:07:30.466 [2024-07-15 20:57:57.567782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.567807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.567863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.567876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.567930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.567944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.466 #40 NEW cov: 12106 ft: 14143 corp: 11/272b lim: 40 exec/s: 0 rss: 71Mb L: 30/36 MS: 1 CopyPart- 00:07:30.466 [2024-07-15 20:57:57.607779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.607805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.607862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.607876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.466 #41 NEW cov: 12106 ft: 14217 corp: 12/293b lim: 40 exec/s: 0 rss: 71Mb L: 21/36 MS: 1 ChangeByte- 00:07:30.466 [2024-07-15 20:57:57.657800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.657827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.466 #42 NEW cov: 12106 ft: 14552 corp: 13/307b lim: 40 exec/s: 0 rss: 71Mb L: 14/36 MS: 1 EraseBytes- 00:07:30.466 [2024-07-15 20:57:57.698132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a21270a cdw11:27575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.698156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.698212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.698226] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.698280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.698294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.466 #43 NEW cov: 12106 ft: 14556 corp: 14/334b lim: 40 exec/s: 0 rss: 71Mb L: 27/36 MS: 1 InsertByte- 00:07:30.466 [2024-07-15 20:57:57.748396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.748421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.748497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5ff cdw11:ffffc5ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.748511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.748569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.748583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.466 [2024-07-15 20:57:57.748644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffff0aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.466 [2024-07-15 20:57:57.748662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.726 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:30.726 #44 NEW cov: 12129 ft: 14614 corp: 15/370b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 ShuffleBytes- 00:07:30.726 [2024-07-15 20:57:57.798417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a23ffff cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.798446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.798505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c52bc5 cdw11:c5ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.798519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.798570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.798584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.726 #45 NEW cov: 12129 ft: 14690 corp: 16/401b lim: 40 exec/s: 0 rss: 
71Mb L: 31/36 MS: 1 InsertByte- 00:07:30.726 [2024-07-15 20:57:57.848786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.848811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.848868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.848882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.848936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffb9b9b9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.848949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.849003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:b9ffffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.849017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.849070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff96ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.849084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.726 #46 NEW cov: 12129 ft: 14758 corp: 17/441b lim: 40 exec/s: 46 rss: 71Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:30.726 [2024-07-15 20:57:57.888654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.888678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.888734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c52bc5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.888747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.888800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff7fffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.888814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.726 #47 NEW cov: 12129 ft: 14770 corp: 18/471b lim: 40 exec/s: 47 rss: 71Mb L: 30/40 MS: 1 ChangeBit- 00:07:30.726 [2024-07-15 20:57:57.928791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270a27 cdw11:57570aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.928815] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.928870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffc5c5c5 cdw11:c5c5c52b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.928884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.928937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c5c55757 cdw11:575757ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.928951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.726 #48 NEW cov: 12129 ft: 14782 corp: 19/495b lim: 40 exec/s: 48 rss: 71Mb L: 24/40 MS: 1 CrossOver- 00:07:30.726 [2024-07-15 20:57:57.969151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.969175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.969229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.969243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.969297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffb9b9b9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.969311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.969362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:b9ffffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.726 [2024-07-15 20:57:57.969376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.726 [2024-07-15 20:57:57.969430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:9696ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.727 [2024-07-15 20:57:57.969451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.727 #49 NEW cov: 12129 ft: 14819 corp: 20/535b lim: 40 exec/s: 49 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:07:30.986 [2024-07-15 20:57:58.019064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff85 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.986 [2024-07-15 20:57:58.019090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.986 [2024-07-15 20:57:58.019144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:30.986 [2024-07-15 20:57:58.019158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.986 [2024-07-15 20:57:58.019213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.986 [2024-07-15 20:57:58.019227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.986 #50 NEW cov: 12129 ft: 14824 corp: 21/565b lim: 40 exec/s: 50 rss: 71Mb L: 30/40 MS: 1 ChangeBit- 00:07:30.986 [2024-07-15 20:57:58.058938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:570a270a cdw11:27575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.986 [2024-07-15 20:57:58.058962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.986 #55 NEW cov: 12129 ft: 14839 corp: 22/579b lim: 40 exec/s: 55 rss: 71Mb L: 14/40 MS: 5 ChangeByte-CrossOver-CopyPart-ShuffleBytes-CrossOver- 00:07:30.986 [2024-07-15 20:57:58.099014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:570a270a cdw11:27575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.986 [2024-07-15 20:57:58.099039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.986 #56 NEW cov: 12129 ft: 14858 corp: 23/593b lim: 40 exec/s: 56 rss: 72Mb L: 14/40 MS: 1 ShuffleBytes- 00:07:30.986 [2024-07-15 20:57:58.149661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.149686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.149741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.149756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.149808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffb9b9b9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.149822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.149875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:b9ffc5c5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.149888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.149943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:c5c5ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.149957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 
m:0 dnr:0 00:07:30.987 #57 NEW cov: 12129 ft: 14901 corp: 24/633b lim: 40 exec/s: 57 rss: 72Mb L: 40/40 MS: 1 CrossOver- 00:07:30.987 [2024-07-15 20:57:58.189401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.189425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.189482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffde cdw11:6a9242d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.189497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.987 #58 NEW cov: 12129 ft: 14943 corp: 25/654b lim: 40 exec/s: 58 rss: 72Mb L: 21/40 MS: 1 CMP- DE: "\336j\222B\330D+\000"- 00:07:30.987 [2024-07-15 20:57:58.239925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.239949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.240004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.240017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.240071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffb9b9b9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.240085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.240137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:b9ffffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.240150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.987 [2024-07-15 20:57:58.240203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.987 [2024-07-15 20:57:58.240217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.987 #59 NEW cov: 12129 ft: 14963 corp: 26/694b lim: 40 exec/s: 59 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:07:31.247 [2024-07-15 20:57:58.279800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5b9b9b9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.279825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.279882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:b9ffc5c5 cdw11:c5c5c5c5 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.279913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.279970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c5c5ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.279983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.247 #60 NEW cov: 12129 ft: 15039 corp: 27/718b lim: 40 exec/s: 60 rss: 72Mb L: 24/40 MS: 1 EraseBytes- 00:07:31.247 [2024-07-15 20:57:58.329929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.329957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.330013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c52bc5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.330027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.330081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.330095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.247 #61 NEW cov: 12129 ft: 15041 corp: 28/748b lim: 40 exec/s: 61 rss: 72Mb L: 30/40 MS: 1 ShuffleBytes- 00:07:31.247 [2024-07-15 20:57:58.370278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.370302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.370359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:31ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.370372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.370426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffb9b9b9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.370440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.370525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:b9ffffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.370538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.370593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.370606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.247 #62 NEW cov: 12129 ft: 15083 corp: 29/788b lim: 40 exec/s: 62 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:07:31.247 [2024-07-15 20:57:58.420071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.420095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.420151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:31ff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.420165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.247 #63 NEW cov: 12129 ft: 15121 corp: 30/809b lim: 40 exec/s: 63 rss: 72Mb L: 21/40 MS: 1 CrossOver- 00:07:31.247 [2024-07-15 20:57:58.470321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff85 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.470346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.470402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.470420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.247 [2024-07-15 20:57:58.470478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.470492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.247 #64 NEW cov: 12129 ft: 15124 corp: 31/835b lim: 40 exec/s: 64 rss: 72Mb L: 26/40 MS: 1 CrossOver- 00:07:31.247 [2024-07-15 20:57:58.520219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.247 [2024-07-15 20:57:58.520244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.507 #65 NEW cov: 12129 ft: 15139 corp: 32/849b lim: 40 exec/s: 65 rss: 72Mb L: 14/40 MS: 1 ShuffleBytes- 00:07:31.507 [2024-07-15 20:57:58.570511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c50000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.570537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.507 [2024-07-15 20:57:58.570595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f8ffff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.570610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.507 #66 NEW cov: 12129 ft: 15160 corp: 33/866b lim: 40 exec/s: 66 rss: 72Mb L: 17/40 MS: 1 ChangeBinInt- 00:07:31.507 [2024-07-15 20:57:58.620736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a21270a cdw11:27575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.620762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.507 [2024-07-15 20:57:58.620816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.620830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.507 [2024-07-15 20:57:58.620882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.620895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.507 #67 NEW cov: 12129 ft: 15169 corp: 34/893b lim: 40 exec/s: 67 rss: 72Mb L: 27/40 MS: 1 ShuffleBytes- 00:07:31.507 [2024-07-15 20:57:58.670651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.670678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.507 #68 NEW cov: 12129 ft: 15187 corp: 35/904b lim: 40 exec/s: 68 rss: 72Mb L: 11/40 MS: 1 EraseBytes- 00:07:31.507 [2024-07-15 20:57:58.710752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff41 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.710777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.507 #69 NEW cov: 12129 ft: 15200 corp: 36/916b lim: 40 exec/s: 69 rss: 73Mb L: 12/40 MS: 1 InsertByte- 00:07:31.507 [2024-07-15 20:57:58.761146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5c5c5ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.761174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.507 [2024-07-15 20:57:58.761232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c5c5c5ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.761249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.507 [2024-07-15 20:57:58.761306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.507 [2024-07-15 20:57:58.761319] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.507 #70 NEW cov: 12129 ft: 15205 corp: 37/946b lim: 40 exec/s: 70 rss: 73Mb L: 30/40 MS: 1 ShuffleBytes- 00:07:31.767 [2024-07-15 20:57:58.811149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffc5 cdw11:c5ff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.767 [2024-07-15 20:57:58.811174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.767 [2024-07-15 20:57:58.811226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f8ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.767 [2024-07-15 20:57:58.811239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.767 #71 NEW cov: 12129 ft: 15225 corp: 38/963b lim: 40 exec/s: 71 rss: 73Mb L: 17/40 MS: 1 CrossOver- 00:07:31.767 [2024-07-15 20:57:58.861193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.767 [2024-07-15 20:57:58.861218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.767 #72 NEW cov: 12129 ft: 15228 corp: 39/974b lim: 40 exec/s: 36 rss: 73Mb L: 11/40 MS: 1 ShuffleBytes- 00:07:31.767 #72 DONE cov: 12129 ft: 15228 corp: 39/974b lim: 40 exec/s: 36 rss: 73Mb 00:07:31.767 ###### Recommended dictionary. ###### 00:07:31.767 "\336j\222B\330D+\000" # Uses: 0 00:07:31.767 ###### End of recommended dictionary. 
###### 00:07:31.767 Done 72 runs in 2 second(s) 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.767 20:57:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:31.767 [2024-07-15 20:57:59.051389] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:31.767 [2024-07-15 20:57:59.051490] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786934 ] 00:07:32.027 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.027 [2024-07-15 20:57:59.233796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.027 [2024-07-15 20:57:59.299183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.286 [2024-07-15 20:57:59.358604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.286 [2024-07-15 20:57:59.374911] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:32.286 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.286 INFO: Seed: 3399619455 00:07:32.286 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:32.286 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:32.286 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:32.286 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.286 #2 INITED exec/s: 0 rss: 63Mb 00:07:32.286 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.286 This may also happen if the target rejected all inputs we tried so far 00:07:32.286 [2024-07-15 20:57:59.424004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.286 [2024-07-15 20:57:59.424032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.286 [2024-07-15 20:57:59.424088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.286 [2024-07-15 20:57:59.424102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.546 NEW_FUNC[1/697]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:32.546 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:32.546 #3 NEW cov: 11878 ft: 11877 corp: 2/20b lim: 35 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:32.546 [2024-07-15 20:57:59.754859] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.546 [2024-07-15 20:57:59.754889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.546 [2024-07-15 20:57:59.754948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.546 [2024-07-15 20:57:59.754961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.546 #9 NEW cov: 12009 ft: 12383 corp: 3/39b lim: 35 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 ChangeBit- 00:07:32.546 [2024-07-15 20:57:59.805043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.546 [2024-07-15 20:57:59.805071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.546 [2024-07-15 20:57:59.805130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.546 [2024-07-15 20:57:59.805144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.546 [2024-07-15 20:57:59.805201] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.546 [2024-07-15 20:57:59.805215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.546 #14 NEW cov: 12015 ft: 13003 corp: 4/62b lim: 35 exec/s: 0 rss: 70Mb L: 23/23 MS: 5 ChangeBinInt-CopyPart-ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:32.805 [2024-07-15 20:57:59.845014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.845039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:57:59.845101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.845115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.805 #15 NEW cov: 12100 ft: 13249 corp: 5/81b lim: 35 exec/s: 0 rss: 70Mb L: 19/23 MS: 1 EraseBytes- 00:07:32.805 [2024-07-15 20:57:59.895163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.895187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:57:59.895246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.895260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.805 #16 NEW cov: 12100 ft: 13339 corp: 6/100b lim: 35 exec/s: 0 rss: 70Mb L: 19/23 MS: 1 ChangeBinInt- 00:07:32.805 [2024-07-15 20:57:59.945315] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.945339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:57:59.945398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.945414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.805 #17 NEW cov: 12100 ft: 13418 corp: 7/116b lim: 35 exec/s: 0 rss: 70Mb L: 16/23 MS: 1 EraseBytes- 00:07:32.805 [2024-07-15 20:57:59.985725] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.985749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:57:59.985810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.985824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:57:59.985882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES SOFTWARE PROGRESS MARKER cid:6 cdw10:00000080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.985899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:57:59.985953] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:57:59.985968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.805 #18 NEW cov: 12100 ft: 13754 corp: 8/144b lim: 35 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:32.805 [2024-07-15 20:58:00.035550] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:58:00.035576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:58:00.035635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:58:00.035650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.805 #19 NEW cov: 12100 ft: 13847 corp: 9/164b lim: 35 exec/s: 0 rss: 70Mb L: 20/28 MS: 1 InsertByte- 00:07:32.805 [2024-07-15 20:58:00.075963] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:58:00.075989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:58:00.076049] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000e5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:58:00.076065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:58:00.076124] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:58:00.076137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.805 [2024-07-15 20:58:00.076195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.805 [2024-07-15 20:58:00.076208] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.064 #20 NEW cov: 12107 ft: 13938 corp: 10/194b lim: 35 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:33.064 [2024-07-15 20:58:00.115598] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.115624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.064 #21 NEW cov: 12107 ft: 14700 corp: 11/206b lim: 35 exec/s: 0 rss: 70Mb L: 12/30 MS: 1 CrossOver- 00:07:33.064 [2024-07-15 20:58:00.156172] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.156197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.064 [2024-07-15 20:58:00.156256] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.156270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.064 [2024-07-15 20:58:00.156328] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000e5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.156344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.064 [2024-07-15 20:58:00.156404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.156418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.064 #22 NEW cov: 12107 ft: 14747 corp: 12/236b lim: 35 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CrossOver- 00:07:33.064 [2024-07-15 20:58:00.206251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.206277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.064 [2024-07-15 20:58:00.206336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.206353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.064 NEW_FUNC[1/2]: 0x4b28e0 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:33.064 NEW_FUNC[2/2]: 0x11e6110 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1603 00:07:33.064 #30 NEW cov: 12164 ft: 14828 corp: 13/262b lim: 35 exec/s: 0 rss: 71Mb L: 26/30 MS: 3 CrossOver-ChangeBinInt-CrossOver- 00:07:33.064 [2024-07-15 20:58:00.266180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.266206] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.064 [2024-07-15 20:58:00.266267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.266283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.064 #31 NEW cov: 12164 ft: 14920 corp: 14/282b lim: 35 exec/s: 0 rss: 71Mb L: 20/30 MS: 1 InsertByte- 00:07:33.064 [2024-07-15 20:58:00.316143] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.064 [2024-07-15 20:58:00.316169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.064 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:33.064 #32 NEW cov: 12187 ft: 15084 corp: 15/289b lim: 35 exec/s: 0 rss: 71Mb L: 7/30 MS: 1 EraseBytes- 00:07:33.325 [2024-07-15 20:58:00.366822] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.325 [2024-07-15 20:58:00.366848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.325 [2024-07-15 20:58:00.366908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.325 [2024-07-15 20:58:00.366923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.325 [2024-07-15 20:58:00.366978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000009a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.325 [2024-07-15 20:58:00.366994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.325 #33 NEW cov: 12187 ft: 15103 corp: 16/318b lim: 35 exec/s: 0 rss: 71Mb L: 29/30 MS: 1 InsertRepeatedBytes- 00:07:33.325 [2024-07-15 20:58:00.417145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.325 [2024-07-15 20:58:00.417173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.325 [2024-07-15 20:58:00.417237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.325 [2024-07-15 20:58:00.417251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.325 [2024-07-15 20:58:00.417310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.325 [2024-07-15 20:58:00.417325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.326 [2024-07-15 20:58:00.417382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:8000009a SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:33.326 [2024-07-15 20:58:00.417399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.326 #34 NEW cov: 12187 ft: 15192 corp: 17/353b lim: 35 exec/s: 34 rss: 71Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:33.326 [2024-07-15 20:58:00.467052] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.467078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.326 [2024-07-15 20:58:00.467135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.467149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.326 [2024-07-15 20:58:00.467206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.467221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.326 [2024-07-15 20:58:00.467277] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.467291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.326 #35 NEW cov: 12187 ft: 15208 corp: 18/381b lim: 35 exec/s: 35 rss: 71Mb L: 28/35 MS: 1 CopyPart- 00:07:33.326 [2024-07-15 20:58:00.516905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.516932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.326 [2024-07-15 20:58:00.516990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.517005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.326 #36 NEW cov: 12187 ft: 15220 corp: 19/401b lim: 35 exec/s: 36 rss: 71Mb L: 20/35 MS: 1 ChangeBit- 00:07:33.326 [2024-07-15 20:58:00.557005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.557031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.326 [2024-07-15 20:58:00.557090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.557106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.326 #37 NEW cov: 12187 ft: 15223 corp: 20/421b lim: 35 exec/s: 37 rss: 71Mb L: 20/35 MS: 1 ChangeBinInt- 00:07:33.326 [2024-07-15 20:58:00.597328] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.597359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.326 [2024-07-15 20:58:00.597416] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.326 [2024-07-15 20:58:00.597431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 #38 NEW cov: 12187 ft: 15228 corp: 21/444b lim: 35 exec/s: 38 rss: 71Mb L: 23/35 MS: 1 CrossOver- 00:07:33.685 [2024-07-15 20:58:00.647456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.647482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.647542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.647556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 #39 NEW cov: 12187 ft: 15277 corp: 22/467b lim: 35 exec/s: 39 rss: 71Mb L: 23/35 MS: 1 CrossOver- 00:07:33.685 [2024-07-15 20:58:00.697553] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.697578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.697639] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.697654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.697714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.697727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 #44 NEW cov: 12187 ft: 15280 corp: 23/494b lim: 35 exec/s: 44 rss: 71Mb L: 27/35 MS: 5 CrossOver-CopyPart-EraseBytes-CopyPart-CrossOver- 00:07:33.685 [2024-07-15 20:58:00.737697] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.737724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.737782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.737796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.737854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 
20:58:00.737868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 #45 NEW cov: 12187 ft: 15297 corp: 24/515b lim: 35 exec/s: 45 rss: 71Mb L: 21/35 MS: 1 CopyPart- 00:07:33.685 [2024-07-15 20:58:00.777944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.777969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.778031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.778045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.778109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.778126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.778184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.778198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.685 #46 NEW cov: 12187 ft: 15299 corp: 25/544b lim: 35 exec/s: 46 rss: 72Mb L: 29/35 MS: 1 InsertByte- 00:07:33.685 [2024-07-15 20:58:00.827902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.827928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.827988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.828003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.828059] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.828073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 #47 NEW cov: 12187 ft: 15310 corp: 26/566b lim: 35 exec/s: 47 rss: 72Mb L: 22/35 MS: 1 InsertByte- 00:07:33.685 [2024-07-15 20:58:00.878255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.878280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.878338] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000e5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.878354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.878412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000e5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.878426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.878494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.878514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.685 #48 NEW cov: 12187 ft: 15312 corp: 27/597b lim: 35 exec/s: 48 rss: 72Mb L: 31/35 MS: 1 InsertByte- 00:07:33.685 [2024-07-15 20:58:00.918194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.918220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.918277] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.918291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-15 20:58:00.918349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.685 [2024-07-15 20:58:00.918365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.949 #49 NEW cov: 12187 ft: 15341 corp: 28/624b lim: 35 exec/s: 49 rss: 72Mb L: 27/35 MS: 1 ChangeBit- 00:07:33.949 [2024-07-15 20:58:00.968342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:00.968368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:00.968426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:00.968447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:00.968501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:00.968515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.949 #50 NEW cov: 12187 ft: 15378 corp: 29/646b lim: 35 exec/s: 50 rss: 72Mb L: 22/35 MS: 1 ChangeBit- 00:07:33.949 [2024-07-15 20:58:01.018285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.018309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.018367] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.018382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.949 #51 NEW cov: 12187 ft: 15385 corp: 30/665b lim: 35 exec/s: 51 rss: 72Mb L: 19/35 MS: 1 CMP- DE: "\000@"- 00:07:33.949 [2024-07-15 20:58:01.058560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.058585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.058646] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.058661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.058718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.058732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.949 #52 NEW cov: 12187 ft: 15395 corp: 31/689b lim: 35 exec/s: 52 rss: 72Mb L: 24/35 MS: 1 InsertRepeatedBytes- 00:07:33.949 [2024-07-15 20:58:01.108700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.108725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.108784] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.108798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.108854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000003e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.108867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.949 #53 NEW cov: 12190 ft: 15552 corp: 32/711b lim: 35 exec/s: 53 rss: 72Mb L: 22/35 MS: 1 ChangeBit- 00:07:33.949 [2024-07-15 20:58:01.159022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.159047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.159107] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.159121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.159177] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.159195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.159253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.159266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.949 #54 NEW cov: 12190 ft: 15555 corp: 33/740b lim: 35 exec/s: 54 rss: 72Mb L: 29/35 MS: 1 ChangeBit- 00:07:33.949 [2024-07-15 20:58:01.208967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.208992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.209047] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.209061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.949 [2024-07-15 20:58:01.209120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.949 [2024-07-15 20:58:01.209134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.207 #55 NEW cov: 12190 ft: 15558 corp: 34/764b lim: 35 exec/s: 55 rss: 72Mb L: 24/35 MS: 1 PersAutoDict- DE: "\000@"- 00:07:34.207 [2024-07-15 20:58:01.258934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.207 [2024-07-15 20:58:01.258958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.207 [2024-07-15 20:58:01.259021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.207 [2024-07-15 20:58:01.259035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.207 #56 NEW cov: 12190 ft: 15565 corp: 35/780b lim: 35 exec/s: 56 rss: 72Mb L: 16/35 MS: 1 EraseBytes- 00:07:34.207 [2024-07-15 20:58:01.299017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.207 [2024-07-15 20:58:01.299042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.207 [2024-07-15 20:58:01.299101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.207 [2024-07-15 20:58:01.299115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.207 #57 NEW cov: 12190 ft: 15569 corp: 36/794b lim: 35 exec/s: 57 rss: 72Mb L: 14/35 MS: 1 EraseBytes- 00:07:34.207 NEW_FUNC[1/2]: 0x4b9410 in feat_write_atomicity 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:34.207 NEW_FUNC[2/2]: 0x11eda70 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1765 00:07:34.207 #59 NEW cov: 12223 ft: 15608 corp: 37/803b lim: 35 exec/s: 59 rss: 72Mb L: 9/35 MS: 2 InsertByte-CrossOver- 00:07:34.207 [2024-07-15 20:58:01.379248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.207 [2024-07-15 20:58:01.379273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.207 [2024-07-15 20:58:01.379334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.207 [2024-07-15 20:58:01.379348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.207 [2024-07-15 20:58:01.419344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.207 [2024-07-15 20:58:01.419368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.207 [2024-07-15 20:58:01.419427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.207 [2024-07-15 20:58:01.419460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.207 #61 NEW cov: 12223 ft: 15627 corp: 38/823b lim: 35 exec/s: 30 rss: 72Mb L: 20/35 MS: 2 InsertByte-ShuffleBytes- 00:07:34.207 #61 DONE cov: 12223 ft: 15627 corp: 38/823b lim: 35 exec/s: 30 rss: 72Mb 00:07:34.207 ###### Recommended dictionary. ###### 00:07:34.207 "\000@" # Uses: 1 00:07:34.207 ###### End of recommended dictionary. 
###### 00:07:34.207 Done 61 runs in 2 second(s) 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.465 20:58:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:34.465 [2024-07-15 20:58:01.606438] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:34.465 [2024-07-15 20:58:01.606526] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787295 ] 00:07:34.465 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.724 [2024-07-15 20:58:01.786753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.724 [2024-07-15 20:58:01.853494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.724 [2024-07-15 20:58:01.912761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.724 [2024-07-15 20:58:01.929075] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:34.724 INFO: Running with entropic power schedule (0xFF, 100). 00:07:34.724 INFO: Seed: 1659673505 00:07:34.724 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:34.724 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:34.724 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:34.724 INFO: A corpus is not provided, starting from an empty corpus 00:07:34.724 #2 INITED exec/s: 0 rss: 64Mb 00:07:34.724 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:34.724 This may also happen if the target rejected all inputs we tried so far 00:07:35.240 NEW_FUNC[1/683]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:35.240 NEW_FUNC[2/683]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:35.240 #7 NEW cov: 11752 ft: 11747 corp: 2/8b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 5 CrossOver-CopyPart-CMP-CrossOver-CrossOver- DE: "\377\377\377h"- 00:07:35.240 #8 NEW cov: 11882 ft: 12347 corp: 3/15b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 CopyPart- 00:07:35.240 #10 NEW cov: 11888 ft: 12684 corp: 4/23b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 2 CMP-CrossOver- DE: "\001\000"- 00:07:35.240 #11 NEW cov: 11973 ft: 12978 corp: 5/31b lim: 35 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CMP- DE: "\377\007"- 00:07:35.240 #12 NEW cov: 11973 ft: 13046 corp: 6/38b lim: 35 exec/s: 0 rss: 71Mb L: 7/8 MS: 1 ChangeBinInt- 00:07:35.499 #13 NEW cov: 11973 ft: 13104 corp: 7/45b lim: 35 exec/s: 0 rss: 71Mb L: 7/8 MS: 1 ChangeByte- 00:07:35.499 #14 NEW cov: 11973 ft: 13237 corp: 8/53b lim: 35 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CrossOver- 00:07:35.499 [2024-07-15 20:58:02.605905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.499 [2024-07-15 20:58:02.605939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.499 NEW_FUNC[1/14]: 0x1797b00 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:07:35.499 NEW_FUNC[2/14]: 0x1797d40 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:07:35.499 #15 NEW cov: 12102 ft: 13419 corp: 9/62b lim: 35 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:35.499 #16 NEW cov: 12102 ft: 13580 corp: 10/74b lim: 35 exec/s: 0 rss: 71Mb L: 12/12 MS: 1 
CrossOver- 00:07:35.499 #17 NEW cov: 12102 ft: 13681 corp: 11/82b lim: 35 exec/s: 0 rss: 71Mb L: 8/12 MS: 1 ShuffleBytes- 00:07:35.499 #18 NEW cov: 12102 ft: 14047 corp: 12/96b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:35.757 #19 NEW cov: 12102 ft: 14065 corp: 13/104b lim: 35 exec/s: 0 rss: 71Mb L: 8/14 MS: 1 EraseBytes- 00:07:35.757 NEW_FUNC[1/1]: 0x4b28e0 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:35.757 #20 NEW cov: 12140 ft: 14165 corp: 14/113b lim: 35 exec/s: 0 rss: 71Mb L: 9/14 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:35.758 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:35.758 #21 NEW cov: 12163 ft: 14196 corp: 15/121b lim: 35 exec/s: 0 rss: 71Mb L: 8/14 MS: 1 ShuffleBytes- 00:07:35.758 #22 NEW cov: 12163 ft: 14204 corp: 16/128b lim: 35 exec/s: 0 rss: 71Mb L: 7/14 MS: 1 CopyPart- 00:07:35.758 #23 NEW cov: 12163 ft: 14300 corp: 17/135b lim: 35 exec/s: 0 rss: 71Mb L: 7/14 MS: 1 EraseBytes- 00:07:35.758 #24 NEW cov: 12163 ft: 14316 corp: 18/142b lim: 35 exec/s: 24 rss: 72Mb L: 7/14 MS: 1 EraseBytes- 00:07:36.016 #25 NEW cov: 12163 ft: 14341 corp: 19/151b lim: 35 exec/s: 25 rss: 72Mb L: 9/14 MS: 1 CMP- DE: "\377\007"- 00:07:36.016 #26 NEW cov: 12163 ft: 14357 corp: 20/160b lim: 35 exec/s: 26 rss: 72Mb L: 9/14 MS: 1 CrossOver- 00:07:36.016 [2024-07-15 20:58:03.127555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.016 [2024-07-15 20:58:03.127582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.016 #27 NEW cov: 12163 ft: 14440 corp: 21/175b lim: 35 exec/s: 27 rss: 72Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:07:36.016 [2024-07-15 20:58:03.177714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.016 [2024-07-15 20:58:03.177740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.016 #28 NEW cov: 12163 ft: 14480 corp: 22/189b lim: 35 exec/s: 28 rss: 72Mb L: 14/15 MS: 1 EraseBytes- 00:07:36.016 [2024-07-15 20:58:03.228054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.016 [2024-07-15 20:58:03.228078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.016 [2024-07-15 20:58:03.228155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007f9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.016 [2024-07-15 20:58:03.228169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.016 [2024-07-15 20:58:03.228231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007f9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.016 [2024-07-15 20:58:03.228244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.016 [2024-07-15 20:58:03.228306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 
cdw10:000007f9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.016 [2024-07-15 20:58:03.228320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.016 #33 NEW cov: 12163 ft: 14978 corp: 23/222b lim: 35 exec/s: 33 rss: 72Mb L: 33/33 MS: 5 CrossOver-ChangeByte-ChangeBit-PersAutoDict-InsertRepeatedBytes- DE: "\377\007"- 00:07:36.016 [2024-07-15 20:58:03.267782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.016 [2024-07-15 20:58:03.267806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.016 #34 NEW cov: 12163 ft: 14987 corp: 24/231b lim: 35 exec/s: 34 rss: 72Mb L: 9/33 MS: 1 ChangeBit- 00:07:36.274 [2024-07-15 20:58:03.308115] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.274 [2024-07-15 20:58:03.308158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.274 #35 NEW cov: 12163 ft: 14993 corp: 25/246b lim: 35 exec/s: 35 rss: 72Mb L: 15/33 MS: 1 InsertByte- 00:07:36.274 [2024-07-15 20:58:03.358249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.274 [2024-07-15 20:58:03.358275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.274 #36 NEW cov: 12163 ft: 15011 corp: 26/261b lim: 35 exec/s: 36 rss: 72Mb L: 15/33 MS: 1 ChangeByte- 00:07:36.274 [2024-07-15 20:58:03.408403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.274 [2024-07-15 20:58:03.408429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.274 #37 NEW cov: 12163 ft: 15027 corp: 27/276b lim: 35 exec/s: 37 rss: 72Mb L: 15/33 MS: 1 CopyPart- 00:07:36.274 #38 NEW cov: 12163 ft: 15070 corp: 28/287b lim: 35 exec/s: 38 rss: 72Mb L: 11/33 MS: 1 InsertRepeatedBytes- 00:07:36.274 #39 NEW cov: 12163 ft: 15084 corp: 29/295b lim: 35 exec/s: 39 rss: 72Mb L: 8/33 MS: 1 ChangeBit- 00:07:36.532 #40 NEW cov: 12163 ft: 15096 corp: 30/305b lim: 35 exec/s: 40 rss: 73Mb L: 10/33 MS: 1 InsertRepeatedBytes- 00:07:36.532 #41 NEW cov: 12163 ft: 15162 corp: 31/314b lim: 35 exec/s: 41 rss: 73Mb L: 9/33 MS: 1 ChangeBinInt- 00:07:36.532 [2024-07-15 20:58:03.659107] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.532 [2024-07-15 20:58:03.659134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.532 #42 NEW cov: 12163 ft: 15181 corp: 32/331b lim: 35 exec/s: 42 rss: 73Mb L: 17/33 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:36.532 #43 NEW cov: 12163 ft: 15189 corp: 33/338b lim: 35 exec/s: 43 rss: 73Mb L: 7/33 MS: 1 ChangeBinInt- 00:07:36.532 #44 NEW cov: 12163 ft: 15221 corp: 34/348b lim: 35 exec/s: 44 rss: 73Mb L: 10/33 MS: 1 ChangeByte- 00:07:36.532 [2024-07-15 20:58:03.779501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.532 [2024-07-15 20:58:03.779527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.532 [2024-07-15 20:58:03.779607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.532 [2024-07-15 20:58:03.779621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.532 [2024-07-15 20:58:03.779683] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.532 [2024-07-15 20:58:03.779698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.532 #45 NEW cov: 12163 ft: 15416 corp: 35/372b lim: 35 exec/s: 45 rss: 73Mb L: 24/33 MS: 1 InsertRepeatedBytes- 00:07:36.791 [2024-07-15 20:58:03.829367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.791 [2024-07-15 20:58:03.829392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.791 #46 NEW cov: 12163 ft: 15450 corp: 36/382b lim: 35 exec/s: 46 rss: 73Mb L: 10/33 MS: 1 CrossOver- 00:07:36.791 #47 NEW cov: 12163 ft: 15454 corp: 37/390b lim: 35 exec/s: 47 rss: 73Mb L: 8/33 MS: 1 CrossOver- 00:07:36.791 #48 NEW cov: 12163 ft: 15455 corp: 38/398b lim: 35 exec/s: 48 rss: 73Mb L: 8/33 MS: 1 ChangeBit- 00:07:36.791 [2024-07-15 20:58:03.969921] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.791 [2024-07-15 20:58:03.969947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.791 #49 NEW cov: 12163 ft: 15464 corp: 39/415b lim: 35 exec/s: 24 rss: 73Mb L: 17/33 MS: 1 CopyPart- 00:07:36.791 #49 DONE cov: 12163 ft: 15464 corp: 39/415b lim: 35 exec/s: 24 rss: 73Mb 00:07:36.791 ###### Recommended dictionary. ###### 00:07:36.791 "\377\377\377h" # Uses: 0 00:07:36.791 "\001\000" # Uses: 3 00:07:36.791 "\377\007" # Uses: 1 00:07:36.791 ###### End of recommended dictionary. 
###### 00:07:36.791 Done 49 runs in 2 second(s) 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.049 20:58:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:37.049 [2024-07-15 20:58:04.171220] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:37.049 [2024-07-15 20:58:04.171290] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787830 ] 00:07:37.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.308 [2024-07-15 20:58:04.346840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.308 [2024-07-15 20:58:04.413505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.308 [2024-07-15 20:58:04.472251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.308 [2024-07-15 20:58:04.488562] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:37.308 INFO: Running with entropic power schedule (0xFF, 100). 00:07:37.308 INFO: Seed: 4217648283 00:07:37.308 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:37.308 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:37.308 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:37.308 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.308 #2 INITED exec/s: 0 rss: 63Mb 00:07:37.308 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:37.308 This may also happen if the target rejected all inputs we tried so far 00:07:37.308 [2024-07-15 20:58:04.536229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.308 [2024-07-15 20:58:04.536263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.825 NEW_FUNC[1/697]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:37.825 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:37.825 #6 NEW cov: 11971 ft: 11972 corp: 2/41b lim: 105 exec/s: 0 rss: 70Mb L: 40/40 MS: 4 ShuffleBytes-InsertRepeatedBytes-ChangeBit-InsertRepeatedBytes- 00:07:37.825 [2024-07-15 20:58:04.887100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-15 20:58:04.887141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.825 #7 NEW cov: 12101 ft: 12555 corp: 3/81b lim: 105 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:37.825 [2024-07-15 20:58:04.967181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-15 20:58:04.967214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.825 #17 NEW cov: 12107 ft: 12713 corp: 4/107b lim: 105 exec/s: 0 rss: 70Mb L: 26/40 MS: 5 CopyPart-InsertByte-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:37.825 [2024-07-15 20:58:05.017295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554432 len:32257 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-15 20:58:05.017326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.825 #18 NEW cov: 12192 ft: 12916 corp: 5/133b lim: 105 exec/s: 0 rss: 70Mb L: 26/40 MS: 1 ChangeByte- 00:07:37.825 [2024-07-15 20:58:05.097559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-15 20:58:05.097590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.084 #19 NEW cov: 12192 ft: 13143 corp: 6/173b lim: 105 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:38.084 [2024-07-15 20:58:05.177801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.084 [2024-07-15 20:58:05.177833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.084 #20 NEW cov: 12192 ft: 13225 corp: 7/203b lim: 105 exec/s: 0 rss: 71Mb L: 30/40 MS: 1 CrossOver- 00:07:38.084 [2024-07-15 20:58:05.257960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7566047374150205545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.084 [2024-07-15 20:58:05.257991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.084 #21 NEW cov: 12192 ft: 13278 corp: 8/225b lim: 105 exec/s: 0 rss: 71Mb L: 22/40 MS: 1 EraseBytes- 00:07:38.084 [2024-07-15 20:58:05.308226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14251014049101104581 len:50630 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.084 [2024-07-15 20:58:05.308257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.084 [2024-07-15 20:58:05.308289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14251014049101104581 len:50630 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.084 [2024-07-15 20:58:05.308307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.084 [2024-07-15 20:58:05.308339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:14251014049101104581 len:50630 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.084 [2024-07-15 20:58:05.308355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.084 [2024-07-15 20:58:05.308388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:14251014049101104581 len:50630 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.084 [2024-07-15 20:58:05.308404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.084 #22 NEW cov: 12192 ft: 13998 corp: 9/323b lim: 105 exec/s: 0 rss: 71Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:07:38.084 [2024-07-15 20:58:05.368246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:38.084 [2024-07-15 20:58:05.368277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.344 #23 NEW cov: 12192 ft: 14145 corp: 10/349b lim: 105 exec/s: 0 rss: 71Mb L: 26/98 MS: 1 CrossOver- 00:07:38.344 [2024-07-15 20:58:05.418353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.344 [2024-07-15 20:58:05.418385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.344 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:38.344 #24 NEW cov: 12209 ft: 14269 corp: 11/390b lim: 105 exec/s: 0 rss: 71Mb L: 41/98 MS: 1 InsertByte- 00:07:38.344 [2024-07-15 20:58:05.498655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.344 [2024-07-15 20:58:05.498686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.344 [2024-07-15 20:58:05.498719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7595718147998050409 len:26978 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.344 [2024-07-15 20:58:05.498737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.344 [2024-07-15 20:58:05.498768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:35185160617984 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.344 [2024-07-15 20:58:05.498800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.344 #25 NEW cov: 12209 ft: 14569 corp: 12/464b lim: 105 exec/s: 25 rss: 71Mb L: 74/98 MS: 1 CrossOver- 00:07:38.344 [2024-07-15 20:58:05.558696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.344 [2024-07-15 20:58:05.558726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.344 #26 NEW cov: 12209 ft: 14593 corp: 13/491b lim: 105 exec/s: 26 rss: 71Mb L: 27/98 MS: 1 InsertByte- 00:07:38.344 [2024-07-15 20:58:05.608856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.344 [2024-07-15 20:58:05.608885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.603 #27 NEW cov: 12209 ft: 14669 corp: 14/518b lim: 105 exec/s: 27 rss: 71Mb L: 27/98 MS: 1 ChangeBinInt- 00:07:38.603 [2024-07-15 20:58:05.689079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7577703747887825001 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.603 [2024-07-15 20:58:05.689109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.603 #28 NEW cov: 12209 ft: 14679 corp: 15/558b lim: 105 exec/s: 28 rss: 71Mb L: 40/98 MS: 1 
ChangeBit- 00:07:38.603 [2024-07-15 20:58:05.739157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7566047374150205545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.603 [2024-07-15 20:58:05.739190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.603 #29 NEW cov: 12209 ft: 14704 corp: 16/581b lim: 105 exec/s: 29 rss: 71Mb L: 23/98 MS: 1 InsertByte- 00:07:38.603 [2024-07-15 20:58:05.819525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7577703747887825001 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.603 [2024-07-15 20:58:05.819555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.603 [2024-07-15 20:58:05.819586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1768488960 len:33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.603 [2024-07-15 20:58:05.819604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.603 [2024-07-15 20:58:05.819637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7523660553955928425 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.603 [2024-07-15 20:58:05.819668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.603 #30 NEW cov: 12209 ft: 14730 corp: 17/658b lim: 105 exec/s: 30 rss: 71Mb L: 77/98 MS: 1 CopyPart- 00:07:38.863 [2024-07-15 20:58:05.899736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.863 [2024-07-15 20:58:05.899766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.863 [2024-07-15 20:58:05.899799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7595718341271578729 len:26978 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.863 [2024-07-15 20:58:05.899817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.863 [2024-07-15 20:58:05.899848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:35185160617984 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.863 [2024-07-15 20:58:05.899865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.863 #31 NEW cov: 12209 ft: 14803 corp: 18/732b lim: 105 exec/s: 31 rss: 71Mb L: 74/98 MS: 1 ChangeByte- 00:07:38.863 [2024-07-15 20:58:05.979856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.863 [2024-07-15 20:58:05.979886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.863 #32 NEW cov: 12209 ft: 14873 corp: 19/772b lim: 105 exec/s: 32 rss: 71Mb L: 40/98 MS: 1 ChangeBinInt- 00:07:38.863 [2024-07-15 20:58:06.029939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.863 [2024-07-15 20:58:06.029968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.863 #33 NEW cov: 12209 ft: 14890 corp: 20/813b lim: 105 exec/s: 33 rss: 72Mb L: 41/98 MS: 1 ChangeBinInt- 00:07:38.863 [2024-07-15 20:58:06.110148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.863 [2024-07-15 20:58:06.110177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.122 #34 NEW cov: 12209 ft: 14903 corp: 21/838b lim: 105 exec/s: 34 rss: 72Mb L: 25/98 MS: 1 EraseBytes- 00:07:39.122 [2024-07-15 20:58:06.190401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34393292989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.122 [2024-07-15 20:58:06.190434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.122 #35 NEW cov: 12209 ft: 14932 corp: 22/865b lim: 105 exec/s: 35 rss: 72Mb L: 27/98 MS: 1 ChangeBinInt- 00:07:39.122 [2024-07-15 20:58:06.270703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146397306985 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.122 [2024-07-15 20:58:06.270732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.122 [2024-07-15 20:58:06.270765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1768488960 len:33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.122 [2024-07-15 20:58:06.270782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.122 [2024-07-15 20:58:06.270814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.122 [2024-07-15 20:58:06.270830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.122 #36 NEW cov: 12209 ft: 14948 corp: 23/930b lim: 105 exec/s: 36 rss: 72Mb L: 65/98 MS: 1 CrossOver- 00:07:39.122 [2024-07-15 20:58:06.330744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:34393292989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.122 [2024-07-15 20:58:06.330774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.122 #37 NEW cov: 12209 ft: 14959 corp: 24/957b lim: 105 exec/s: 37 rss: 72Mb L: 27/98 MS: 1 ShuffleBytes- 00:07:39.122 [2024-07-15 20:58:06.410957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.122 [2024-07-15 20:58:06.410987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.380 #38 NEW cov: 12216 ft: 14986 corp: 25/987b lim: 105 exec/s: 38 rss: 72Mb L: 30/98 MS: 1 CMP- DE: "\005\000\000\000"- 00:07:39.380 [2024-07-15 20:58:06.491155] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:33554432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.380 [2024-07-15 20:58:06.491185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.380 #39 NEW cov: 12216 ft: 15005 corp: 26/1020b lim: 105 exec/s: 19 rss: 72Mb L: 33/98 MS: 1 CopyPart- 00:07:39.380 #39 DONE cov: 12216 ft: 15005 corp: 26/1020b lim: 105 exec/s: 19 rss: 72Mb 00:07:39.380 ###### Recommended dictionary. ###### 00:07:39.380 "\005\000\000\000" # Uses: 0 00:07:39.380 ###### End of recommended dictionary. ###### 00:07:39.380 Done 39 runs in 2 second(s) 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.637 20:58:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:39.637 [2024-07-15 20:58:06.725941] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:39.637 [2024-07-15 20:58:06.726038] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788251 ] 00:07:39.637 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.637 [2024-07-15 20:58:06.928195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.896 [2024-07-15 20:58:06.994633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.896 [2024-07-15 20:58:07.053552] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.896 [2024-07-15 20:58:07.069843] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:39.896 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.896 INFO: Seed: 2505681653 00:07:39.896 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:39.896 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:39.896 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:39.896 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.896 #2 INITED exec/s: 0 rss: 63Mb 00:07:39.896 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:39.896 This may also happen if the target rejected all inputs we tried so far 00:07:39.896 [2024-07-15 20:58:07.125386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.896 [2024-07-15 20:58:07.125414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.896 [2024-07-15 20:58:07.125453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.896 [2024-07-15 20:58:07.125465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.896 [2024-07-15 20:58:07.125498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.896 [2024-07-15 20:58:07.125509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.896 [2024-07-15 20:58:07.125526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.896 [2024-07-15 20:58:07.125537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.154 NEW_FUNC[1/698]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:40.154 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:40.154 #3 NEW cov: 11993 ft: 11986 corp: 2/105b lim: 120 exec/s: 0 rss: 70Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:07:40.414 [2024-07-15 20:58:07.456255] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.456294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.456363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.456382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.456437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.456460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.456518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446462598732840959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.456535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.414 #4 NEW cov: 12123 ft: 12552 corp: 3/209b lim: 120 exec/s: 0 rss: 70Mb L: 104/104 MS: 1 ChangeBit- 00:07:40.414 [2024-07-15 20:58:07.506319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.506345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.506391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.506412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.506464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.506496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.506549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446462598732840959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.506565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.414 #5 NEW cov: 12129 ft: 12777 corp: 4/315b lim: 120 exec/s: 0 rss: 70Mb L: 106/106 MS: 1 CMP- DE: "\016\000"- 00:07:40.414 [2024-07-15 20:58:07.556466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.556494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.556556] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.556572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.556626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.556641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.556693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446462598732840959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.556709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.414 #6 NEW cov: 12214 ft: 13003 corp: 5/421b lim: 120 exec/s: 0 rss: 70Mb L: 106/106 MS: 1 ShuffleBytes- 00:07:40.414 [2024-07-15 20:58:07.606622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.606650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.414 [2024-07-15 20:58:07.606699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.414 [2024-07-15 20:58:07.606718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.415 [2024-07-15 20:58:07.606769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.606785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.415 [2024-07-15 20:58:07.606839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.606855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.415 #12 NEW cov: 12214 ft: 13297 corp: 6/534b lim: 120 exec/s: 0 rss: 70Mb L: 113/113 MS: 1 InsertRepeatedBytes- 00:07:40.415 [2024-07-15 20:58:07.646731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.646758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.415 [2024-07-15 20:58:07.646822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.646838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:07:40.415 [2024-07-15 20:58:07.646890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.646904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.415 [2024-07-15 20:58:07.646958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.646973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.415 #13 NEW cov: 12214 ft: 13324 corp: 7/651b lim: 120 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 InsertRepeatedBytes- 00:07:40.415 [2024-07-15 20:58:07.686821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.686848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.415 [2024-07-15 20:58:07.686887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.686903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.415 [2024-07-15 20:58:07.686954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.686969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.415 [2024-07-15 20:58:07.687022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446462598732840959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.415 [2024-07-15 20:58:07.687037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.674 #14 NEW cov: 12214 ft: 13365 corp: 8/759b lim: 120 exec/s: 0 rss: 70Mb L: 108/117 MS: 1 CrossOver- 00:07:40.674 [2024-07-15 20:58:07.726916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.726943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.726992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.727008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.727076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.727092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.727143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.727158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.674 #15 NEW cov: 12214 ft: 13487 corp: 9/876b lim: 120 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 ShuffleBytes- 00:07:40.674 [2024-07-15 20:58:07.777035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.777062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.777123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.777139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.777192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.777207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.777258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.777273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.674 #16 NEW cov: 12214 ft: 13524 corp: 10/993b lim: 120 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 ShuffleBytes- 00:07:40.674 [2024-07-15 20:58:07.817163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.817189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.817256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.817272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.817324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.817339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.817390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.817406] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.674 #17 NEW cov: 12214 ft: 13600 corp: 11/1110b lim: 120 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 ShuffleBytes- 00:07:40.674 [2024-07-15 20:58:07.866845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.866870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.674 #18 NEW cov: 12214 ft: 14518 corp: 12/1154b lim: 120 exec/s: 0 rss: 70Mb L: 44/117 MS: 1 InsertRepeatedBytes- 00:07:40.674 [2024-07-15 20:58:07.907408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.907434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.907506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709547519 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.907522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.907583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.907598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.907650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.907666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.674 #19 NEW cov: 12214 ft: 14553 corp: 13/1271b lim: 120 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 ChangeBit- 00:07:40.674 [2024-07-15 20:58:07.947512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.947539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.947604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.947621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.947685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.947700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.674 [2024-07-15 20:58:07.947753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.674 [2024-07-15 20:58:07.947769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.934 #20 NEW cov: 12214 ft: 14576 corp: 14/1388b lim: 120 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 CrossOver- 00:07:40.934 [2024-07-15 20:58:07.997649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:07.997677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:07.997715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:07.997730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:07.997799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:07.997814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:07.997866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:07.997881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.934 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:40.934 #21 NEW cov: 12237 ft: 14600 corp: 15/1506b lim: 120 exec/s: 0 rss: 71Mb L: 118/118 MS: 1 InsertByte- 00:07:40.934 [2024-07-15 20:58:08.037945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.037973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.038040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.038055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.038108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.038123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.038176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:61681 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.038192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.038246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.038265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:40.934 #22 NEW cov: 12237 ft: 14672 corp: 16/1626b lim: 120 exec/s: 0 rss: 71Mb L: 120/120 MS: 1 CopyPart- 00:07:40.934 [2024-07-15 20:58:08.087949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.087975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.088021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.088042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.088092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.088108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.088158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.088173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.934 #23 NEW cov: 12237 ft: 14699 corp: 17/1740b lim: 120 exec/s: 23 rss: 71Mb L: 114/120 MS: 1 InsertByte- 00:07:40.934 [2024-07-15 20:58:08.138048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.138074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.138123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.138142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.138191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.138206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.934 [2024-07-15 20:58:08.138257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446462598732840959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.934 [2024-07-15 20:58:08.138272] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.934 #24 NEW cov: 12237 ft: 14733 corp: 18/1853b lim: 120 exec/s: 24 rss: 71Mb L: 113/120 MS: 1 InsertRepeatedBytes- 00:07:40.934 [2024-07-15 20:58:08.188236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.935 [2024-07-15 20:58:08.188263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.935 [2024-07-15 20:58:08.188326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.935 [2024-07-15 20:58:08.188342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.935 [2024-07-15 20:58:08.188392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.935 [2024-07-15 20:58:08.188411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.935 [2024-07-15 20:58:08.188465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.935 [2024-07-15 20:58:08.188481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.935 #25 NEW cov: 12237 ft: 14741 corp: 19/1957b lim: 120 exec/s: 25 rss: 71Mb L: 104/120 MS: 1 CopyPart- 00:07:41.194 [2024-07-15 20:58:08.228359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.228385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.228433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.228453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.228507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.228522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.228578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.228593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.194 #26 NEW cov: 12237 ft: 14757 corp: 20/2074b lim: 120 exec/s: 26 rss: 71Mb L: 117/120 MS: 1 ChangeByte- 00:07:41.194 [2024-07-15 20:58:08.268459] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.268486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.268553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.268568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.268622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.268638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.268691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446462598732840959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.268705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.194 #27 NEW cov: 12237 ft: 14775 corp: 21/2180b lim: 120 exec/s: 27 rss: 71Mb L: 106/120 MS: 1 ChangeBinInt- 00:07:41.194 [2024-07-15 20:58:08.318572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.318600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.318665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446743171766419455 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.318684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.318734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.318750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.318801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446462598732840959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.318817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.194 #28 NEW cov: 12237 ft: 14779 corp: 22/2286b lim: 120 exec/s: 28 rss: 71Mb L: 106/120 MS: 1 ChangeByte- 00:07:41.194 [2024-07-15 20:58:08.358717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.358743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:41.194 [2024-07-15 20:58:08.358808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.358823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.358874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.358888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.358939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.358954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.194 #29 NEW cov: 12237 ft: 14796 corp: 23/2391b lim: 120 exec/s: 29 rss: 71Mb L: 105/120 MS: 1 EraseBytes- 00:07:41.194 [2024-07-15 20:58:08.408848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743481188614143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.408875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.408925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.408941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.408993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.409008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.409060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.409075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.194 #30 NEW cov: 12237 ft: 14819 corp: 24/2509b lim: 120 exec/s: 30 rss: 71Mb L: 118/120 MS: 1 ChangeBinInt- 00:07:41.194 [2024-07-15 20:58:08.458960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.458987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.194 [2024-07-15 20:58:08.459036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.194 [2024-07-15 20:58:08.459054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.195 [2024-07-15 20:58:08.459104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.195 [2024-07-15 20:58:08.459119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.195 [2024-07-15 20:58:08.459171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.195 [2024-07-15 20:58:08.459187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.195 #31 NEW cov: 12237 ft: 14824 corp: 25/2626b lim: 120 exec/s: 31 rss: 71Mb L: 117/120 MS: 1 ChangeBinInt- 00:07:41.454 [2024-07-15 20:58:08.499102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.499129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.499175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.499197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.499247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18442505391959965695 len:61681 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.499262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.499313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.499329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.454 #32 NEW cov: 12237 ft: 14836 corp: 26/2727b lim: 120 exec/s: 32 rss: 71Mb L: 101/120 MS: 1 EraseBytes- 00:07:41.454 [2024-07-15 20:58:08.549233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.549259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.549323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.549340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.549389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.549404] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.549460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.549478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.454 #33 NEW cov: 12237 ft: 14844 corp: 27/2835b lim: 120 exec/s: 33 rss: 71Mb L: 108/120 MS: 1 CMP- DE: "\002\000\000\000"- 00:07:41.454 [2024-07-15 20:58:08.589367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.589392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.589462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.589478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.589539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.589554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.589606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.589622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.454 #34 NEW cov: 12237 ft: 14858 corp: 28/2953b lim: 120 exec/s: 34 rss: 71Mb L: 118/120 MS: 1 CrossOver- 00:07:41.454 [2024-07-15 20:58:08.629490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743189130838015 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.629526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.629595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.629611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.629663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.629678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.629727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:41.454 [2024-07-15 20:58:08.629743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.454 #35 NEW cov: 12237 ft: 14872 corp: 29/3070b lim: 120 exec/s: 35 rss: 71Mb L: 117/120 MS: 1 ChangeByte- 00:07:41.454 [2024-07-15 20:58:08.669596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.669621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.669690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.669706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.669759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.669775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.669826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.669841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.454 #36 NEW cov: 12237 ft: 14882 corp: 30/3188b lim: 120 exec/s: 36 rss: 71Mb L: 118/120 MS: 1 CrossOver- 00:07:41.454 [2024-07-15 20:58:08.709555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743189130838015 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.709588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.709626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.709641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.454 [2024-07-15 20:58:08.709691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.454 [2024-07-15 20:58:08.709705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.454 #37 NEW cov: 12237 ft: 15185 corp: 31/3276b lim: 120 exec/s: 37 rss: 71Mb L: 88/120 MS: 1 CrossOver- 00:07:41.713 [2024-07-15 20:58:08.759518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.713 [2024-07-15 20:58:08.759546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.713 [2024-07-15 
20:58:08.759595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.713 [2024-07-15 20:58:08.759617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.713 #38 NEW cov: 12237 ft: 15568 corp: 32/3346b lim: 120 exec/s: 38 rss: 71Mb L: 70/120 MS: 1 EraseBytes- 00:07:41.713 [2024-07-15 20:58:08.799968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.713 [2024-07-15 20:58:08.799995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.800060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.800076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.800128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.800143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.800195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.800210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.714 #39 NEW cov: 12237 ft: 15578 corp: 33/3463b lim: 120 exec/s: 39 rss: 71Mb L: 117/120 MS: 1 ChangeByte- 00:07:41.714 [2024-07-15 20:58:08.840098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743189130838015 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.840125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.840185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.840201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.840250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.840264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.840317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.840333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:07:41.714 #40 NEW cov: 12237 ft: 15585 corp: 34/3580b lim: 120 exec/s: 40 rss: 71Mb L: 117/120 MS: 1 PersAutoDict- DE: "\016\000"- 00:07:41.714 [2024-07-15 20:58:08.880238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.880264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.880315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.880339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.880390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.880406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.880463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.880479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.714 #41 NEW cov: 12237 ft: 15601 corp: 35/3697b lim: 120 exec/s: 41 rss: 71Mb L: 117/120 MS: 1 ChangeBinInt- 00:07:41.714 [2024-07-15 20:58:08.920513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.920539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.920591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446743171766419455 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.920608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.920660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.920675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.920731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446462598732840959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.920747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.714 #42 NEW cov: 12237 ft: 15655 corp: 36/3803b lim: 120 exec/s: 42 rss: 72Mb L: 106/120 MS: 1 ChangeByte- 00:07:41.714 [2024-07-15 20:58:08.970324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.970350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.970405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.970428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.714 [2024-07-15 20:58:08.970488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.714 [2024-07-15 20:58:08.970504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.974 #43 NEW cov: 12237 ft: 15666 corp: 37/3882b lim: 120 exec/s: 43 rss: 72Mb L: 79/120 MS: 1 EraseBytes- 00:07:41.974 [2024-07-15 20:58:09.020622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743481188614143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.020647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.020716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.020732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.020783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446462620207677439 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.020800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.020852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.020868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.070737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743481188614143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.070762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.070833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.070848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.070901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446462620207677439 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.070916] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.070970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.070987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.974 #45 NEW cov: 12237 ft: 15667 corp: 38/4000b lim: 120 exec/s: 45 rss: 72Mb L: 118/120 MS: 2 ChangeBinInt-ChangeByte- 00:07:41.974 [2024-07-15 20:58:09.111061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133688 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.111087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.111141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.111156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.111207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.111223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.974 [2024-07-15 20:58:09.111273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17361641481138401520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.974 [2024-07-15 20:58:09.111288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.974 #46 NEW cov: 12237 ft: 15754 corp: 39/4118b lim: 120 exec/s: 23 rss: 72Mb L: 118/120 MS: 1 ChangeBinInt- 00:07:41.974 #46 DONE cov: 12237 ft: 15754 corp: 39/4118b lim: 120 exec/s: 23 rss: 72Mb 00:07:41.974 ###### Recommended dictionary. ###### 00:07:41.974 "\016\000" # Uses: 1 00:07:41.974 "\002\000\000\000" # Uses: 0 00:07:41.974 ###### End of recommended dictionary. 
###### 00:07:41.974 Done 46 runs in 2 second(s) 00:07:41.974 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:42.233 20:58:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:42.233 [2024-07-15 20:58:09.311486] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
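[editor's note] For reference, the launch of fuzzer instance 18 traced just above by nvmf/run.sh can be reproduced by hand roughly as sketched below. This is assembled only from commands visible in this log (core mask, memory size, trid, config, corpus dir and -Z id are the ones run.sh printed); the redirection of the sed output into /tmp/fuzz_json_18.conf and the exact contents of the LSAN suppression file are inferred from the -c argument and the echoed leak: lines, not shown verbatim in the trace, and the workspace paths are the Jenkins ones — substitute your own checkout.

  # Paths and target id as printed by run.sh (adjust SPDK for a local checkout).
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'

  # Fuzzer 18 listens on port 4418; run.sh rewrites the stock config with sed
  # (output presumably lands in the file later passed via -c).
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_18.conf

  # LSAN suppressions echoed by run.sh, then exported exactly as in the trace.
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
  export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0

  # -m 0x1: single core; -s 512: 512 MB of memory (matches "-m 512" in the EAL
  # parameters below); -t 1: run length set by run.sh; -Z 18: fuzzer type 18,
  # which exercises the WRITE ZEROES path (fuzz_nvm_write_zeroes_command).
  mkdir -p "$SPDK/../corpus/llvm_nvmf_18"
  "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$SPDK/../output/llvm/" -F "$TRID" -c /tmp/fuzz_json_18.conf \
      -t 1 -D "$SPDK/../corpus/llvm_nvmf_18" -Z 18

The startup output that follows (SPDK/DPDK initialization, TCP transport listening on 127.0.0.1:4418, libFuzzer seed and coverage counters) is what such an invocation produces.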
00:07:42.233 [2024-07-15 20:58:09.311567] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788655 ] 00:07:42.233 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.233 [2024-07-15 20:58:09.490305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.491 [2024-07-15 20:58:09.557152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.491 [2024-07-15 20:58:09.616333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.491 [2024-07-15 20:58:09.632607] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:42.491 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.491 INFO: Seed: 771748924 00:07:42.491 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:42.491 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:42.491 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:42.491 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.491 #2 INITED exec/s: 0 rss: 64Mb 00:07:42.491 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.491 This may also happen if the target rejected all inputs we tried so far 00:07:42.491 [2024-07-15 20:58:09.702039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:42.491 [2024-07-15 20:58:09.702081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.491 [2024-07-15 20:58:09.702184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:42.491 [2024-07-15 20:58:09.702206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.491 [2024-07-15 20:58:09.702323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:42.491 [2024-07-15 20:58:09.702346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.491 [2024-07-15 20:58:09.702469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:42.491 [2024-07-15 20:58:09.702491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.491 [2024-07-15 20:58:09.702605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:42.491 [2024-07-15 20:58:09.702627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.750 NEW_FUNC[1/696]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:42.750 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.750 #6 NEW cov: 11935 ft: 11936 corp: 2/101b lim: 100 exec/s: 0 rss: 70Mb L: 100/100 MS: 4 
ChangeBit-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:07:43.010 [2024-07-15 20:58:10.052696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.010 [2024-07-15 20:58:10.052763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.052908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.010 [2024-07-15 20:58:10.052944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.053090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.010 [2024-07-15 20:58:10.053123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.010 #7 NEW cov: 12065 ft: 12781 corp: 3/169b lim: 100 exec/s: 0 rss: 70Mb L: 68/100 MS: 1 EraseBytes- 00:07:43.010 [2024-07-15 20:58:10.112605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.010 [2024-07-15 20:58:10.112646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.112749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.010 [2024-07-15 20:58:10.112770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.112887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.010 [2024-07-15 20:58:10.112909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.113021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.010 [2024-07-15 20:58:10.113042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.010 #9 NEW cov: 12071 ft: 13115 corp: 4/251b lim: 100 exec/s: 0 rss: 70Mb L: 82/100 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:43.010 [2024-07-15 20:58:10.152821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.010 [2024-07-15 20:58:10.152852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.152958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.010 [2024-07-15 20:58:10.152978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.153095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.010 [2024-07-15 20:58:10.153125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.010 #10 NEW cov: 12156 ft: 13335 corp: 5/314b lim: 100 exec/s: 0 rss: 70Mb L: 63/100 MS: 1 EraseBytes- 
00:07:43.010 [2024-07-15 20:58:10.192927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.010 [2024-07-15 20:58:10.192961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.193057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.010 [2024-07-15 20:58:10.193078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.193195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.010 [2024-07-15 20:58:10.193217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.010 #11 NEW cov: 12156 ft: 13403 corp: 6/391b lim: 100 exec/s: 0 rss: 71Mb L: 77/100 MS: 1 EraseBytes- 00:07:43.010 [2024-07-15 20:58:10.243539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.010 [2024-07-15 20:58:10.243568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.243646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.010 [2024-07-15 20:58:10.243668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.243781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.010 [2024-07-15 20:58:10.243798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.243906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.010 [2024-07-15 20:58:10.243926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.244035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:43.010 [2024-07-15 20:58:10.244055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:43.010 #12 NEW cov: 12156 ft: 13602 corp: 7/491b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 ChangeByte- 00:07:43.010 [2024-07-15 20:58:10.283585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.010 [2024-07-15 20:58:10.283612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.283693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.010 [2024-07-15 20:58:10.283715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.283829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.010 [2024-07-15 20:58:10.283847] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.283951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.010 [2024-07-15 20:58:10.283973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.010 [2024-07-15 20:58:10.284091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:43.010 [2024-07-15 20:58:10.284114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:43.269 #13 NEW cov: 12156 ft: 13687 corp: 8/591b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 ShuffleBytes- 00:07:43.269 [2024-07-15 20:58:10.323684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.269 [2024-07-15 20:58:10.323716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.323808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.269 [2024-07-15 20:58:10.323825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.323937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.269 [2024-07-15 20:58:10.323959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.324070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.269 [2024-07-15 20:58:10.324093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.324204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:43.269 [2024-07-15 20:58:10.324227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:43.269 #14 NEW cov: 12156 ft: 13791 corp: 9/691b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 ChangeBinInt- 00:07:43.269 [2024-07-15 20:58:10.373462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.269 [2024-07-15 20:58:10.373491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.373582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.269 [2024-07-15 20:58:10.373605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.373719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.269 [2024-07-15 20:58:10.373739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.269 #15 NEW cov: 12156 ft: 13896 corp: 
10/760b lim: 100 exec/s: 0 rss: 71Mb L: 69/100 MS: 1 InsertByte- 00:07:43.269 [2024-07-15 20:58:10.424019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.269 [2024-07-15 20:58:10.424048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.424148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.269 [2024-07-15 20:58:10.424172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.424285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.269 [2024-07-15 20:58:10.424310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.424421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.269 [2024-07-15 20:58:10.424445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.424565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:43.269 [2024-07-15 20:58:10.424586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:43.269 #16 NEW cov: 12156 ft: 13922 corp: 11/860b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 ChangeByte- 00:07:43.269 [2024-07-15 20:58:10.474150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.269 [2024-07-15 20:58:10.474177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.474269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.269 [2024-07-15 20:58:10.474288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.474400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.269 [2024-07-15 20:58:10.474423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.474541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.269 [2024-07-15 20:58:10.474561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.474675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:43.269 [2024-07-15 20:58:10.474696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:43.269 #17 NEW cov: 12156 ft: 13937 corp: 12/960b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:07:43.269 [2024-07-15 20:58:10.524053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:0 nsid:0 00:07:43.269 [2024-07-15 20:58:10.524086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.524178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.269 [2024-07-15 20:58:10.524197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.524321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.269 [2024-07-15 20:58:10.524341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.269 [2024-07-15 20:58:10.524455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.269 [2024-07-15 20:58:10.524475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.269 #18 NEW cov: 12156 ft: 13977 corp: 13/1042b lim: 100 exec/s: 0 rss: 71Mb L: 82/100 MS: 1 EraseBytes- 00:07:43.528 [2024-07-15 20:58:10.574163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.528 [2024-07-15 20:58:10.574193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.574300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.528 [2024-07-15 20:58:10.574319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.574431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.528 [2024-07-15 20:58:10.574456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.574576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.528 [2024-07-15 20:58:10.574601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.528 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:43.528 #24 NEW cov: 12179 ft: 14018 corp: 14/1124b lim: 100 exec/s: 0 rss: 72Mb L: 82/100 MS: 1 ChangeBinInt- 00:07:43.528 [2024-07-15 20:58:10.613803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.528 [2024-07-15 20:58:10.613827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.528 #26 NEW cov: 12179 ft: 14394 corp: 15/1150b lim: 100 exec/s: 0 rss: 72Mb L: 26/100 MS: 2 ChangeBit-CrossOver- 00:07:43.528 [2024-07-15 20:58:10.654415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.528 [2024-07-15 20:58:10.654447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.654538] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.528 [2024-07-15 20:58:10.654561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.654681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.528 [2024-07-15 20:58:10.654699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.654810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.528 [2024-07-15 20:58:10.654830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.528 #27 NEW cov: 12179 ft: 14424 corp: 16/1249b lim: 100 exec/s: 27 rss: 72Mb L: 99/100 MS: 1 CopyPart- 00:07:43.528 [2024-07-15 20:58:10.704305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.528 [2024-07-15 20:58:10.704335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.704449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.528 [2024-07-15 20:58:10.704468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.704600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.528 [2024-07-15 20:58:10.704623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.528 #28 NEW cov: 12179 ft: 14432 corp: 17/1312b lim: 100 exec/s: 28 rss: 72Mb L: 63/100 MS: 1 ChangeBit- 00:07:43.528 [2024-07-15 20:58:10.744652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.528 [2024-07-15 20:58:10.744679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.744764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.528 [2024-07-15 20:58:10.744787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.744901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.528 [2024-07-15 20:58:10.744922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.745046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.528 [2024-07-15 20:58:10.745065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.528 #29 NEW cov: 12179 ft: 14441 corp: 18/1404b lim: 100 exec/s: 29 rss: 72Mb L: 92/100 MS: 1 InsertRepeatedBytes- 00:07:43.528 [2024-07-15 20:58:10.794732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.528 [2024-07-15 20:58:10.794760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.794879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.528 [2024-07-15 20:58:10.794903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.528 [2024-07-15 20:58:10.795020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.528 [2024-07-15 20:58:10.795041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.528 #30 NEW cov: 12179 ft: 14449 corp: 19/1482b lim: 100 exec/s: 30 rss: 72Mb L: 78/100 MS: 1 InsertRepeatedBytes- 00:07:43.787 [2024-07-15 20:58:10.834706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.787 [2024-07-15 20:58:10.834736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.834834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.787 [2024-07-15 20:58:10.834860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.787 #31 NEW cov: 12179 ft: 14700 corp: 20/1526b lim: 100 exec/s: 31 rss: 72Mb L: 44/100 MS: 1 EraseBytes- 00:07:43.787 [2024-07-15 20:58:10.884728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.787 [2024-07-15 20:58:10.884755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.884869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.787 [2024-07-15 20:58:10.884891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.787 #32 NEW cov: 12179 ft: 14754 corp: 21/1578b lim: 100 exec/s: 32 rss: 72Mb L: 52/100 MS: 1 CMP- DE: "\005\000\000\000\000\000\000\000"- 00:07:43.787 [2024-07-15 20:58:10.935333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.787 [2024-07-15 20:58:10.935362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.935432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.787 [2024-07-15 20:58:10.935458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.935568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.787 [2024-07-15 20:58:10.935593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.935711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 
nsid:0 00:07:43.787 [2024-07-15 20:58:10.935734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.787 #33 NEW cov: 12179 ft: 14768 corp: 22/1661b lim: 100 exec/s: 33 rss: 72Mb L: 83/100 MS: 1 InsertByte- 00:07:43.787 [2024-07-15 20:58:10.975604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.787 [2024-07-15 20:58:10.975636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.975723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.787 [2024-07-15 20:58:10.975747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.975858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:43.787 [2024-07-15 20:58:10.975883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.975994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:43.787 [2024-07-15 20:58:10.976016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:10.976134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:43.787 [2024-07-15 20:58:10.976158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:43.787 #34 NEW cov: 12179 ft: 14801 corp: 23/1761b lim: 100 exec/s: 34 rss: 72Mb L: 100/100 MS: 1 ShuffleBytes- 00:07:43.787 [2024-07-15 20:58:11.015117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.787 [2024-07-15 20:58:11.015146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:11.015274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.787 [2024-07-15 20:58:11.015294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.787 #35 NEW cov: 12179 ft: 14815 corp: 24/1806b lim: 100 exec/s: 35 rss: 72Mb L: 45/100 MS: 1 EraseBytes- 00:07:43.787 [2024-07-15 20:58:11.065341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:43.787 [2024-07-15 20:58:11.065377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.787 [2024-07-15 20:58:11.065494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:43.788 [2024-07-15 20:58:11.065518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.046 #36 NEW cov: 12179 ft: 14838 corp: 25/1858b lim: 100 exec/s: 36 rss: 72Mb L: 52/100 MS: 1 CrossOver- 00:07:44.046 [2024-07-15 20:58:11.115702] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.046 [2024-07-15 20:58:11.115739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.046 [2024-07-15 20:58:11.115860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.046 [2024-07-15 20:58:11.115881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.046 [2024-07-15 20:58:11.116008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.046 [2024-07-15 20:58:11.116032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.046 #37 NEW cov: 12179 ft: 14904 corp: 26/1935b lim: 100 exec/s: 37 rss: 72Mb L: 77/100 MS: 1 ChangeBinInt- 00:07:44.046 [2024-07-15 20:58:11.155960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.046 [2024-07-15 20:58:11.155993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.046 [2024-07-15 20:58:11.156067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.046 [2024-07-15 20:58:11.156089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.046 [2024-07-15 20:58:11.156207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.046 [2024-07-15 20:58:11.156231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.046 [2024-07-15 20:58:11.156353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.046 [2024-07-15 20:58:11.156376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.156504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:44.047 [2024-07-15 20:58:11.156527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.047 #38 NEW cov: 12179 ft: 14910 corp: 27/2035b lim: 100 exec/s: 38 rss: 73Mb L: 100/100 MS: 1 ChangeByte- 00:07:44.047 [2024-07-15 20:58:11.206296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.047 [2024-07-15 20:58:11.206327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.206400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.047 [2024-07-15 20:58:11.206421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.206549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.047 [2024-07-15 20:58:11.206570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.206688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.047 [2024-07-15 20:58:11.206709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.206836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:44.047 [2024-07-15 20:58:11.206859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.047 #39 NEW cov: 12179 ft: 14929 corp: 28/2135b lim: 100 exec/s: 39 rss: 73Mb L: 100/100 MS: 1 CrossOver- 00:07:44.047 [2024-07-15 20:58:11.256395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.047 [2024-07-15 20:58:11.256425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.256509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.047 [2024-07-15 20:58:11.256531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.256646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.047 [2024-07-15 20:58:11.256669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.256787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.047 [2024-07-15 20:58:11.256810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.256927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:44.047 [2024-07-15 20:58:11.256950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.047 #40 NEW cov: 12179 ft: 14931 corp: 29/2235b lim: 100 exec/s: 40 rss: 73Mb L: 100/100 MS: 1 ChangeByte- 00:07:44.047 [2024-07-15 20:58:11.306298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.047 [2024-07-15 20:58:11.306330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.306416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.047 [2024-07-15 20:58:11.306435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.306552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.047 [2024-07-15 20:58:11.306575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.047 [2024-07-15 20:58:11.306701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.047 [2024-07-15 20:58:11.306720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.306 #41 NEW cov: 12179 ft: 14946 corp: 30/2334b lim: 100 exec/s: 41 rss: 73Mb L: 99/100 MS: 1 EraseBytes- 00:07:44.306 [2024-07-15 20:58:11.356235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.306 [2024-07-15 20:58:11.356263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.306 [2024-07-15 20:58:11.356354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.306 [2024-07-15 20:58:11.356376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.306 [2024-07-15 20:58:11.356492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.306 [2024-07-15 20:58:11.356513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.306 #42 NEW cov: 12179 ft: 14954 corp: 31/2411b lim: 100 exec/s: 42 rss: 73Mb L: 77/100 MS: 1 ChangeByte- 00:07:44.306 [2024-07-15 20:58:11.396596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.306 [2024-07-15 20:58:11.396626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.306 [2024-07-15 20:58:11.396717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.306 [2024-07-15 20:58:11.396736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.306 [2024-07-15 20:58:11.396844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.306 [2024-07-15 20:58:11.396869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.306 [2024-07-15 20:58:11.396985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.306 [2024-07-15 20:58:11.397006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.306 #43 NEW cov: 12179 ft: 14969 corp: 32/2493b lim: 100 exec/s: 43 rss: 73Mb L: 82/100 MS: 1 ChangeBit- 00:07:44.306 [2024-07-15 20:58:11.446727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.306 [2024-07-15 20:58:11.446756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.306 [2024-07-15 20:58:11.446864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.306 [2024-07-15 20:58:11.446888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.306 [2024-07-15 20:58:11.447000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.306 [2024-07-15 
20:58:11.447023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.306 [2024-07-15 20:58:11.447143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.306 [2024-07-15 20:58:11.447165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.307 #44 NEW cov: 12179 ft: 14971 corp: 33/2591b lim: 100 exec/s: 44 rss: 73Mb L: 98/100 MS: 1 CopyPart- 00:07:44.307 [2024-07-15 20:58:11.486910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.307 [2024-07-15 20:58:11.486941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.487029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.307 [2024-07-15 20:58:11.487055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.487169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.307 [2024-07-15 20:58:11.487190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.487311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.307 [2024-07-15 20:58:11.487335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.307 #50 NEW cov: 12179 ft: 15006 corp: 34/2673b lim: 100 exec/s: 50 rss: 73Mb L: 82/100 MS: 1 ChangeBit- 00:07:44.307 [2024-07-15 20:58:11.527215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.307 [2024-07-15 20:58:11.527244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.527318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.307 [2024-07-15 20:58:11.527342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.527471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.307 [2024-07-15 20:58:11.527495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.527611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.307 [2024-07-15 20:58:11.527630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.527742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:44.307 [2024-07-15 20:58:11.527766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.307 #51 NEW cov: 12179 ft: 
15030 corp: 35/2773b lim: 100 exec/s: 51 rss: 73Mb L: 100/100 MS: 1 ShuffleBytes- 00:07:44.307 [2024-07-15 20:58:11.566931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.307 [2024-07-15 20:58:11.566962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.567050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.307 [2024-07-15 20:58:11.567071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.307 [2024-07-15 20:58:11.567187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.307 [2024-07-15 20:58:11.567214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.307 #52 NEW cov: 12179 ft: 15040 corp: 36/2850b lim: 100 exec/s: 52 rss: 73Mb L: 77/100 MS: 1 ChangeBinInt- 00:07:44.566 [2024-07-15 20:58:11.607433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.566 [2024-07-15 20:58:11.607468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.607547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.566 [2024-07-15 20:58:11.607570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.607680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.566 [2024-07-15 20:58:11.607705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.607822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.566 [2024-07-15 20:58:11.607845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.607954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:44.566 [2024-07-15 20:58:11.607977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.566 #53 NEW cov: 12179 ft: 15049 corp: 37/2950b lim: 100 exec/s: 53 rss: 73Mb L: 100/100 MS: 1 ChangeBit- 00:07:44.566 [2024-07-15 20:58:11.647621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.566 [2024-07-15 20:58:11.647654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.647751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.566 [2024-07-15 20:58:11.647772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.647888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.566 [2024-07-15 20:58:11.647924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.648035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.566 [2024-07-15 20:58:11.648054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.648177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:44.566 [2024-07-15 20:58:11.648196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.566 #54 NEW cov: 12179 ft: 15059 corp: 38/3050b lim: 100 exec/s: 54 rss: 73Mb L: 100/100 MS: 1 ChangeByte- 00:07:44.566 [2024-07-15 20:58:11.687695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:44.566 [2024-07-15 20:58:11.687725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.687816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:44.566 [2024-07-15 20:58:11.687841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.687953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:44.566 [2024-07-15 20:58:11.687975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.688090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:44.566 [2024-07-15 20:58:11.688111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.566 [2024-07-15 20:58:11.688223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:44.566 [2024-07-15 20:58:11.688244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.566 #55 NEW cov: 12179 ft: 15060 corp: 39/3150b lim: 100 exec/s: 27 rss: 73Mb L: 100/100 MS: 1 CopyPart- 00:07:44.566 #55 DONE cov: 12179 ft: 15060 corp: 39/3150b lim: 100 exec/s: 27 rss: 73Mb 00:07:44.566 ###### Recommended dictionary. ###### 00:07:44.566 "\005\000\000\000\000\000\000\000" # Uses: 1 00:07:44.566 ###### End of recommended dictionary. 
###### 00:07:44.566 Done 55 runs in 2 second(s) 00:07:44.566 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:44.566 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.566 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.566 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:44.567 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:44.825 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.825 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:44.826 20:58:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:44.826 [2024-07-15 20:58:11.891656] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:44.826 [2024-07-15 20:58:11.891725] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789190 ] 00:07:44.826 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.826 [2024-07-15 20:58:12.068192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.086 [2024-07-15 20:58:12.134182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.086 [2024-07-15 20:58:12.193268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.086 [2024-07-15 20:58:12.209540] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:45.086 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.086 INFO: Seed: 3350737617 00:07:45.086 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:45.086 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:45.086 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:45.086 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.086 #2 INITED exec/s: 0 rss: 63Mb 00:07:45.086 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.086 This may also happen if the target rejected all inputs we tried so far 00:07:45.086 [2024-07-15 20:58:12.254312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:3342 00:07:45.086 [2024-07-15 20:58:12.254345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.086 [2024-07-15 20:58:12.254394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 00:07:45.086 [2024-07-15 20:58:12.254416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.086 [2024-07-15 20:58:12.254453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:45.086 [2024-07-15 20:58:12.254475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.086 [2024-07-15 20:58:12.254503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:940422246894996749 len:3342 00:07:45.086 [2024-07-15 20:58:12.254520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.086 [2024-07-15 20:58:12.254547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:940422246894996749 len:3397 00:07:45.086 [2024-07-15 20:58:12.254564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:45.345 NEW_FUNC[1/696]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:45.345 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.345 #9 NEW cov: 11913 ft: 11914 corp: 2/51b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:45.345 [2024-07-15 20:58:12.605097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1665666365006592 len:60139 00:07:45.345 [2024-07-15 20:58:12.605136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.345 [2024-07-15 20:58:12.605187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 00:07:45.345 [2024-07-15 20:58:12.605207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.345 [2024-07-15 20:58:12.605237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16927600444109941482 len:60139 00:07:45.345 [2024-07-15 20:58:12.605254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.606 #13 NEW cov: 12043 ft: 12678 corp: 3/85b lim: 50 exec/s: 0 rss: 70Mb L: 34/50 MS: 4 ChangeByte-CMP-InsertByte-InsertRepeatedBytes- DE: "\000\000\000\005"- 00:07:45.606 [2024-07-15 20:58:12.665074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:3342 00:07:45.606 [2024-07-15 20:58:12.665105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.606 [2024-07-15 20:58:12.665154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940418948360113421 len:3342 00:07:45.606 [2024-07-15 20:58:12.665175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.606 #14 NEW cov: 12049 ft: 13211 corp: 4/106b lim: 50 exec/s: 0 rss: 70Mb L: 21/50 MS: 1 CrossOver- 00:07:45.606 [2024-07-15 20:58:12.745245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 00:07:45.606 [2024-07-15 20:58:12.745275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.606 [2024-07-15 20:58:12.745323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422234010094861 len:3342 00:07:45.606 [2024-07-15 20:58:12.745346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.606 #15 NEW cov: 12134 ft: 13514 corp: 5/128b lim: 50 exec/s: 0 rss: 70Mb L: 22/50 MS: 1 CrossOver- 00:07:45.606 [2024-07-15 20:58:12.805587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:3342 00:07:45.606 [2024-07-15 20:58:12.805621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.606 [2024-07-15 20:58:12.805653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:940422246894996749 len:3342 00:07:45.606 [2024-07-15 20:58:12.805671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.606 [2024-07-15 20:58:12.805709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:45.606 [2024-07-15 20:58:12.805725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.606 [2024-07-15 20:58:12.805752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:940422246894996749 len:3342 00:07:45.606 [2024-07-15 20:58:12.805768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.606 [2024-07-15 20:58:12.805796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:940422246894996749 len:3397 00:07:45.606 [2024-07-15 20:58:12.805812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:45.606 #16 NEW cov: 12134 ft: 13620 corp: 6/178b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:07:45.606 [2024-07-15 20:58:12.865638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17511661017581495040 len:60139 00:07:45.606 [2024-07-15 20:58:12.865668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.606 [2024-07-15 20:58:12.865715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 00:07:45.606 [2024-07-15 20:58:12.865738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.606 [2024-07-15 20:58:12.865768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16927600444109941482 len:60139 00:07:45.606 [2024-07-15 20:58:12.865784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.866 #17 NEW cov: 12134 ft: 13753 corp: 7/212b lim: 50 exec/s: 0 rss: 70Mb L: 34/50 MS: 1 ChangeByte- 00:07:45.866 [2024-07-15 20:58:12.945804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 00:07:45.866 [2024-07-15 20:58:12.945834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.866 [2024-07-15 20:58:12.945882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940418948360113421 len:3342 00:07:45.866 [2024-07-15 20:58:12.945906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.866 #18 NEW cov: 12134 ft: 13814 corp: 8/233b lim: 50 exec/s: 0 rss: 70Mb L: 21/50 MS: 1 EraseBytes- 00:07:45.866 [2024-07-15 20:58:13.026005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 00:07:45.866 [2024-07-15 20:58:13.026035] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.866 [2024-07-15 20:58:13.026084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:939577821964864781 len:3342 00:07:45.866 [2024-07-15 20:58:13.026106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.866 #19 NEW cov: 12134 ft: 13896 corp: 9/253b lim: 50 exec/s: 0 rss: 71Mb L: 20/50 MS: 1 EraseBytes- 00:07:45.866 [2024-07-15 20:58:13.106188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:944925846472035597 len:3342 00:07:45.866 [2024-07-15 20:58:13.106217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.866 [2024-07-15 20:58:13.106264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940418948360113421 len:3342 00:07:45.866 [2024-07-15 20:58:13.106288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.866 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:45.866 #20 NEW cov: 12151 ft: 13990 corp: 10/274b lim: 50 exec/s: 0 rss: 71Mb L: 21/50 MS: 1 ChangeBit- 00:07:45.866 [2024-07-15 20:58:13.156496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:3342 00:07:45.866 [2024-07-15 20:58:13.156527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.866 [2024-07-15 20:58:13.156573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 00:07:45.866 [2024-07-15 20:58:13.156597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.866 [2024-07-15 20:58:13.156627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:45.866 [2024-07-15 20:58:13.156643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.866 [2024-07-15 20:58:13.156671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:940422246894996772 len:3342 00:07:45.866 [2024-07-15 20:58:13.156687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:45.866 [2024-07-15 20:58:13.156715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:940422246894996749 len:3397 00:07:45.866 [2024-07-15 20:58:13.156731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:46.125 #21 NEW cov: 12151 ft: 14095 corp: 11/324b lim: 50 exec/s: 0 rss: 71Mb L: 50/50 MS: 1 ChangeByte- 00:07:46.125 [2024-07-15 20:58:13.206549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17511661017581495040 len:60139 
00:07:46.125 [2024-07-15 20:58:13.206579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.125 [2024-07-15 20:58:13.206609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16861477008816401130 len:1515 00:07:46.126 [2024-07-15 20:58:13.206627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.126 [2024-07-15 20:58:13.206659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16927600444109941482 len:60139 00:07:46.126 [2024-07-15 20:58:13.206675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.126 [2024-07-15 20:58:13.286743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17511661017581495040 len:60139 00:07:46.126 [2024-07-15 20:58:13.286771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.126 [2024-07-15 20:58:13.286818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16861477008816401130 len:1515 00:07:46.126 [2024-07-15 20:58:13.286848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.126 [2024-07-15 20:58:13.286877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16927600444094545925 len:60139 00:07:46.126 [2024-07-15 20:58:13.286893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.126 #23 NEW cov: 12151 ft: 14123 corp: 12/358b lim: 50 exec/s: 23 rss: 71Mb L: 34/50 MS: 2 PersAutoDict-CopyPart- DE: "\000\000\000\005"- 00:07:46.126 [2024-07-15 20:58:13.336871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:3342 00:07:46.126 [2024-07-15 20:58:13.336901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.126 [2024-07-15 20:58:13.336947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940418948360113421 len:3329 00:07:46.126 [2024-07-15 20:58:13.336972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.126 [2024-07-15 20:58:13.337002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:46.126 [2024-07-15 20:58:13.337018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.126 [2024-07-15 20:58:13.337046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:46.126 [2024-07-15 20:58:13.337062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.126 #24 NEW cov: 12151 ft: 14181 corp: 13/407b lim: 50 exec/s: 24 rss: 71Mb L: 49/50 MS: 1 InsertRepeatedBytes- 
00:07:46.126 [2024-07-15 20:58:13.417026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:1 00:07:46.126 [2024-07-15 20:58:13.417057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.126 [2024-07-15 20:58:13.417105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422246676368653 len:3342 00:07:46.126 [2024-07-15 20:58:13.417131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.385 #25 NEW cov: 12151 ft: 14275 corp: 14/432b lim: 50 exec/s: 25 rss: 71Mb L: 25/50 MS: 1 PersAutoDict- DE: "\000\000\000\005"- 00:07:46.385 [2024-07-15 20:58:13.477280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:3342 00:07:46.385 [2024-07-15 20:58:13.477311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.385 [2024-07-15 20:58:13.477357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 00:07:46.385 [2024-07-15 20:58:13.477383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.385 [2024-07-15 20:58:13.477412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:46.385 [2024-07-15 20:58:13.477428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.385 [2024-07-15 20:58:13.477463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:940422246894996749 len:3342 00:07:46.385 [2024-07-15 20:58:13.477480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.385 [2024-07-15 20:58:13.477507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:940422246894996749 len:3397 00:07:46.385 [2024-07-15 20:58:13.477528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:46.385 #26 NEW cov: 12151 ft: 14295 corp: 15/482b lim: 50 exec/s: 26 rss: 71Mb L: 50/50 MS: 1 ShuffleBytes- 00:07:46.385 [2024-07-15 20:58:13.527307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:3342 00:07:46.385 [2024-07-15 20:58:13.527337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.385 [2024-07-15 20:58:13.527369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940418948360113421 len:3342 00:07:46.385 [2024-07-15 20:58:13.527386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.385 #32 NEW cov: 12151 ft: 14329 corp: 16/503b lim: 50 exec/s: 32 rss: 71Mb L: 21/50 MS: 1 ShuffleBytes- 00:07:46.385 [2024-07-15 20:58:13.577436] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3567 00:07:46.385 [2024-07-15 20:58:13.577474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.385 [2024-07-15 20:58:13.577506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940418948360113421 len:3342 00:07:46.385 [2024-07-15 20:58:13.577523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.385 #33 NEW cov: 12151 ft: 14356 corp: 17/524b lim: 50 exec/s: 33 rss: 71Mb L: 21/50 MS: 1 InsertByte- 00:07:46.385 [2024-07-15 20:58:13.657678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1665666365006592 len:60139 00:07:46.385 [2024-07-15 20:58:13.657707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.385 [2024-07-15 20:58:13.657753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 00:07:46.385 [2024-07-15 20:58:13.657770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.385 [2024-07-15 20:58:13.657799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2516081636524354048 len:60139 00:07:46.385 [2024-07-15 20:58:13.657815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.645 #34 NEW cov: 12151 ft: 14373 corp: 18/558b lim: 50 exec/s: 34 rss: 71Mb L: 34/50 MS: 1 ChangeBinInt- 00:07:46.645 [2024-07-15 20:58:13.707795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422191010090253 len:14 00:07:46.645 [2024-07-15 20:58:13.707825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.645 [2024-07-15 20:58:13.707857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422234010094861 len:3342 00:07:46.645 [2024-07-15 20:58:13.707875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.645 #35 NEW cov: 12151 ft: 14380 corp: 19/580b lim: 50 exec/s: 35 rss: 71Mb L: 22/50 MS: 1 CMP- DE: "\000\000"- 00:07:46.645 [2024-07-15 20:58:13.757923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:944925846472035597 len:3342 00:07:46.645 [2024-07-15 20:58:13.757951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.645 [2024-07-15 20:58:13.757997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2093343751501843725 len:3342 00:07:46.645 [2024-07-15 20:58:13.758027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.645 [2024-07-15 20:58:13.758056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:2 nsid:0 lba:939577821964864781 len:3342 00:07:46.645 [2024-07-15 20:58:13.758073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.645 #36 NEW cov: 12151 ft: 14403 corp: 20/616b lim: 50 exec/s: 36 rss: 71Mb L: 36/50 MS: 1 CopyPart- 00:07:46.645 [2024-07-15 20:58:13.838147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15625477329605430744 len:55513 00:07:46.645 [2024-07-15 20:58:13.838176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.645 [2024-07-15 20:58:13.838222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15625477333024561368 len:55513 00:07:46.645 [2024-07-15 20:58:13.838239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.645 [2024-07-15 20:58:13.838268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:46.645 [2024-07-15 20:58:13.838285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.645 #37 NEW cov: 12151 ft: 14425 corp: 21/654b lim: 50 exec/s: 37 rss: 71Mb L: 38/50 MS: 1 InsertRepeatedBytes- 00:07:46.645 [2024-07-15 20:58:13.888230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:944925846472035597 len:3342 00:07:46.645 [2024-07-15 20:58:13.888260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.645 [2024-07-15 20:58:13.888306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940418948360113421 len:3342 00:07:46.645 [2024-07-15 20:58:13.888324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.645 #38 NEW cov: 12151 ft: 14434 corp: 22/675b lim: 50 exec/s: 38 rss: 71Mb L: 21/50 MS: 1 ShuffleBytes- 00:07:46.905 [2024-07-15 20:58:13.938394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:1 00:07:46.905 [2024-07-15 20:58:13.938425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:13.938482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422246676368653 len:3342 00:07:46.905 [2024-07-15 20:58:13.938502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.905 #39 NEW cov: 12151 ft: 14455 corp: 23/700b lim: 50 exec/s: 39 rss: 71Mb L: 25/50 MS: 1 CopyPart- 00:07:46.905 [2024-07-15 20:58:14.018739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422191060421901 len:1 00:07:46.905 [2024-07-15 20:58:14.018769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.018799] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422246760779021 len:3342 00:07:46.905 [2024-07-15 20:58:14.018817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.018851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:46.905 [2024-07-15 20:58:14.018882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.018914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:940422246894996749 len:3342 00:07:46.905 [2024-07-15 20:58:14.018930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.018958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:940422246894996749 len:3397 00:07:46.905 [2024-07-15 20:58:14.018975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:46.905 #40 NEW cov: 12151 ft: 14495 corp: 24/750b lim: 50 exec/s: 40 rss: 71Mb L: 50/50 MS: 1 PersAutoDict- DE: "\000\000\000\005"- 00:07:46.905 [2024-07-15 20:58:14.068861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940422246894996749 len:3342 00:07:46.905 [2024-07-15 20:58:14.068891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.068921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432359913932262669 len:3342 00:07:46.905 [2024-07-15 20:58:14.068939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.068973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:46.905 [2024-07-15 20:58:14.068989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.069016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:940422246894996749 len:3342 00:07:46.905 [2024-07-15 20:58:14.069032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.069059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:940422246894996749 len:3397 00:07:46.905 [2024-07-15 20:58:14.069075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:46.905 #41 NEW cov: 12151 ft: 14521 corp: 25/800b lim: 50 exec/s: 41 rss: 71Mb L: 50/50 MS: 1 CMP- DE: "\006\000"- 00:07:46.905 [2024-07-15 20:58:14.118886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:46.905 [2024-07-15 20:58:14.118916] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.118963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:46.905 [2024-07-15 20:58:14.118989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.119018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:46.905 [2024-07-15 20:58:14.119034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.905 #43 NEW cov: 12158 ft: 14567 corp: 26/835b lim: 50 exec/s: 43 rss: 71Mb L: 35/50 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:46.905 [2024-07-15 20:58:14.169047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:278172331409152 len:65536 00:07:46.905 [2024-07-15 20:58:14.169078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.169110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:46.905 [2024-07-15 20:58:14.169131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.905 [2024-07-15 20:58:14.169162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:46.905 [2024-07-15 20:58:14.169178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.164 #44 NEW cov: 12158 ft: 14592 corp: 27/870b lim: 50 exec/s: 44 rss: 71Mb L: 35/50 MS: 1 ChangeBinInt- 00:07:47.164 [2024-07-15 20:58:14.249213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17511661016843297536 len:60139 00:07:47.164 [2024-07-15 20:58:14.249244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.164 [2024-07-15 20:58:14.249290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 00:07:47.164 [2024-07-15 20:58:14.249316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.164 [2024-07-15 20:58:14.249345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16927600444109941482 len:60139 00:07:47.164 [2024-07-15 20:58:14.249361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.164 #45 NEW cov: 12158 ft: 14613 corp: 28/904b lim: 50 exec/s: 22 rss: 71Mb L: 34/50 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:47.164 #45 DONE cov: 12158 ft: 14613 corp: 28/904b lim: 50 exec/s: 22 rss: 71Mb 00:07:47.165 ###### Recommended dictionary. 
###### 00:07:47.165 "\000\000\000\005" # Uses: 3 00:07:47.165 "\000\000" # Uses: 1 00:07:47.165 "\006\000" # Uses: 0 00:07:47.165 ###### End of recommended dictionary. ###### 00:07:47.165 Done 45 runs in 2 second(s) 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:47.165 20:58:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:47.165 [2024-07-15 20:58:14.450955] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:47.165 [2024-07-15 20:58:14.451024] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789606 ] 00:07:47.424 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.424 [2024-07-15 20:58:14.632891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.424 [2024-07-15 20:58:14.698724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.684 [2024-07-15 20:58:14.757975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.684 [2024-07-15 20:58:14.774277] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:47.684 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.684 INFO: Seed: 1619778285 00:07:47.684 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:47.684 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:47.684 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:47.684 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.684 #2 INITED exec/s: 0 rss: 64Mb 00:07:47.684 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.684 This may also happen if the target rejected all inputs we tried so far 00:07:47.684 [2024-07-15 20:58:14.819309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:47.684 [2024-07-15 20:58:14.819340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.943 NEW_FUNC[1/698]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:47.943 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:47.943 #38 NEW cov: 11971 ft: 11972 corp: 2/28b lim: 90 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:07:47.943 [2024-07-15 20:58:15.150245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:47.943 [2024-07-15 20:58:15.150276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.943 #39 NEW cov: 12101 ft: 12399 corp: 3/55b lim: 90 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 ChangeByte- 00:07:47.943 [2024-07-15 20:58:15.200299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:47.943 [2024-07-15 20:58:15.200328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.943 #40 NEW cov: 12107 ft: 12767 corp: 4/74b lim: 90 exec/s: 0 rss: 70Mb L: 19/27 MS: 1 CrossOver- 00:07:48.203 [2024-07-15 20:58:15.240407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.203 [2024-07-15 20:58:15.240436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.203 #41 NEW cov: 12192 ft: 13000 corp: 5/109b lim: 90 exec/s: 0 
rss: 70Mb L: 35/35 MS: 1 CMP- DE: "\020\000\000\000\000\000\000\000"- 00:07:48.203 [2024-07-15 20:58:15.280494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.203 [2024-07-15 20:58:15.280521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.203 #42 NEW cov: 12192 ft: 13144 corp: 6/136b lim: 90 exec/s: 0 rss: 71Mb L: 27/35 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:48.203 [2024-07-15 20:58:15.320643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.203 [2024-07-15 20:58:15.320670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.203 #47 NEW cov: 12192 ft: 13231 corp: 7/158b lim: 90 exec/s: 0 rss: 71Mb L: 22/35 MS: 5 EraseBytes-CopyPart-EraseBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:48.203 [2024-07-15 20:58:15.370766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.203 [2024-07-15 20:58:15.370793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.203 #48 NEW cov: 12192 ft: 13295 corp: 8/185b lim: 90 exec/s: 0 rss: 71Mb L: 27/35 MS: 1 ChangeBit- 00:07:48.203 [2024-07-15 20:58:15.420901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.203 [2024-07-15 20:58:15.420927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.203 #49 NEW cov: 12192 ft: 13313 corp: 9/212b lim: 90 exec/s: 0 rss: 71Mb L: 27/35 MS: 1 ChangeBinInt- 00:07:48.203 [2024-07-15 20:58:15.471060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.203 [2024-07-15 20:58:15.471087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.203 #50 NEW cov: 12192 ft: 13423 corp: 10/239b lim: 90 exec/s: 0 rss: 71Mb L: 27/35 MS: 1 ChangeByte- 00:07:48.463 [2024-07-15 20:58:15.511283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.463 [2024-07-15 20:58:15.511310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.463 [2024-07-15 20:58:15.511356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:48.463 [2024-07-15 20:58:15.511372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.463 #51 NEW cov: 12192 ft: 14237 corp: 11/285b lim: 90 exec/s: 0 rss: 71Mb L: 46/46 MS: 1 CrossOver- 00:07:48.463 [2024-07-15 20:58:15.551269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.463 [2024-07-15 20:58:15.551295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.463 #52 NEW cov: 12192 ft: 14272 corp: 12/318b lim: 90 exec/s: 0 rss: 71Mb L: 33/46 MS: 1 InsertRepeatedBytes- 00:07:48.463 [2024-07-15 20:58:15.601375] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.463 [2024-07-15 20:58:15.601401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.463 #53 NEW cov: 12192 ft: 14286 corp: 13/340b lim: 90 exec/s: 0 rss: 71Mb L: 22/46 MS: 1 ChangeBinInt- 00:07:48.463 [2024-07-15 20:58:15.651701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.463 [2024-07-15 20:58:15.651728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.463 [2024-07-15 20:58:15.651784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:48.463 [2024-07-15 20:58:15.651800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.463 #54 NEW cov: 12192 ft: 14304 corp: 14/386b lim: 90 exec/s: 0 rss: 71Mb L: 46/46 MS: 1 ShuffleBytes- 00:07:48.463 [2024-07-15 20:58:15.701868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.463 [2024-07-15 20:58:15.701897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.463 [2024-07-15 20:58:15.701960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:48.463 [2024-07-15 20:58:15.701976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.463 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:48.463 #55 NEW cov: 12215 ft: 14406 corp: 15/439b lim: 90 exec/s: 0 rss: 72Mb L: 53/53 MS: 1 CopyPart- 00:07:48.463 [2024-07-15 20:58:15.751842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.463 [2024-07-15 20:58:15.751870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.722 #56 NEW cov: 12215 ft: 14428 corp: 16/467b lim: 90 exec/s: 0 rss: 72Mb L: 28/53 MS: 1 EraseBytes- 00:07:48.722 [2024-07-15 20:58:15.801998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.722 [2024-07-15 20:58:15.802024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.722 #57 NEW cov: 12215 ft: 14435 corp: 17/489b lim: 90 exec/s: 57 rss: 72Mb L: 22/53 MS: 1 CopyPart- 00:07:48.722 [2024-07-15 20:58:15.842105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.722 [2024-07-15 20:58:15.842131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.722 #58 NEW cov: 12215 ft: 14510 corp: 18/517b lim: 90 exec/s: 58 rss: 72Mb L: 28/53 MS: 1 ChangeBit- 00:07:48.722 [2024-07-15 20:58:15.892268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.722 [2024-07-15 20:58:15.892295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.722 #59 NEW cov: 12215 ft: 14528 corp: 19/544b lim: 90 exec/s: 59 rss: 72Mb L: 27/53 MS: 1 ShuffleBytes- 00:07:48.722 [2024-07-15 20:58:15.932361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.722 [2024-07-15 20:58:15.932389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.722 #60 NEW cov: 12215 ft: 14578 corp: 20/571b lim: 90 exec/s: 60 rss: 72Mb L: 27/53 MS: 1 ChangeByte- 00:07:48.722 [2024-07-15 20:58:15.982936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:48.722 [2024-07-15 20:58:15.982963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.722 [2024-07-15 20:58:15.983011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:48.722 [2024-07-15 20:58:15.983026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.722 [2024-07-15 20:58:15.983083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:48.722 [2024-07-15 20:58:15.983099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.722 [2024-07-15 20:58:15.983156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:48.722 [2024-07-15 20:58:15.983184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.010 #61 NEW cov: 12215 ft: 14998 corp: 21/647b lim: 90 exec/s: 61 rss: 72Mb L: 76/76 MS: 1 InsertRepeatedBytes- 00:07:49.010 [2024-07-15 20:58:16.032635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.010 [2024-07-15 20:58:16.032663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.010 #62 NEW cov: 12215 ft: 15005 corp: 22/674b lim: 90 exec/s: 62 rss: 72Mb L: 27/76 MS: 1 ChangeByte- 00:07:49.010 [2024-07-15 20:58:16.082798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.010 [2024-07-15 20:58:16.082825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.010 #68 NEW cov: 12215 ft: 15012 corp: 23/701b lim: 90 exec/s: 68 rss: 72Mb L: 27/76 MS: 1 ChangeBinInt- 00:07:49.010 [2024-07-15 20:58:16.122861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.010 [2024-07-15 20:58:16.122889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.010 #69 NEW cov: 12215 ft: 15066 corp: 24/723b lim: 90 exec/s: 69 rss: 72Mb L: 22/76 MS: 1 ChangeByte- 00:07:49.010 [2024-07-15 20:58:16.173017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.010 [2024-07-15 20:58:16.173044] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.010 #70 NEW cov: 12215 ft: 15080 corp: 25/750b lim: 90 exec/s: 70 rss: 72Mb L: 27/76 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:49.010 [2024-07-15 20:58:16.213154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.010 [2024-07-15 20:58:16.213182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.010 #71 NEW cov: 12215 ft: 15114 corp: 26/772b lim: 90 exec/s: 71 rss: 72Mb L: 22/76 MS: 1 ChangeBit- 00:07:49.010 [2024-07-15 20:58:16.253275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.010 [2024-07-15 20:58:16.253301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.010 #72 NEW cov: 12215 ft: 15130 corp: 27/799b lim: 90 exec/s: 72 rss: 72Mb L: 27/76 MS: 1 ChangeBit- 00:07:49.010 [2024-07-15 20:58:16.293554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.010 [2024-07-15 20:58:16.293581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.010 [2024-07-15 20:58:16.293627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.010 [2024-07-15 20:58:16.293643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.268 #73 NEW cov: 12215 ft: 15163 corp: 28/835b lim: 90 exec/s: 73 rss: 72Mb L: 36/76 MS: 1 InsertByte- 00:07:49.268 [2024-07-15 20:58:16.333462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.268 [2024-07-15 20:58:16.333489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.268 #74 NEW cov: 12215 ft: 15251 corp: 29/857b lim: 90 exec/s: 74 rss: 73Mb L: 22/76 MS: 1 ChangeBit- 00:07:49.268 [2024-07-15 20:58:16.383629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.268 [2024-07-15 20:58:16.383655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.268 #75 NEW cov: 12215 ft: 15258 corp: 30/884b lim: 90 exec/s: 75 rss: 73Mb L: 27/76 MS: 1 PersAutoDict- DE: "\020\000\000\000\000\000\000\000"- 00:07:49.268 [2024-07-15 20:58:16.423731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.268 [2024-07-15 20:58:16.423757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.268 #76 NEW cov: 12215 ft: 15279 corp: 31/915b lim: 90 exec/s: 76 rss: 73Mb L: 31/76 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:49.268 [2024-07-15 20:58:16.473851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.268 [2024-07-15 20:58:16.473881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:07:49.268 #79 NEW cov: 12215 ft: 15299 corp: 32/940b lim: 90 exec/s: 79 rss: 73Mb L: 25/76 MS: 3 InsertByte-ChangeBit-CrossOver- 00:07:49.268 [2024-07-15 20:58:16.514131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.268 [2024-07-15 20:58:16.514158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.268 [2024-07-15 20:58:16.514215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.268 [2024-07-15 20:58:16.514231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.268 #80 NEW cov: 12215 ft: 15381 corp: 33/987b lim: 90 exec/s: 80 rss: 73Mb L: 47/76 MS: 1 InsertByte- 00:07:49.527 [2024-07-15 20:58:16.564424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.527 [2024-07-15 20:58:16.564456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.527 [2024-07-15 20:58:16.564510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.527 [2024-07-15 20:58:16.564524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.527 [2024-07-15 20:58:16.564580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.527 [2024-07-15 20:58:16.564597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.527 #81 NEW cov: 12215 ft: 15657 corp: 34/1043b lim: 90 exec/s: 81 rss: 73Mb L: 56/76 MS: 1 CopyPart- 00:07:49.527 [2024-07-15 20:58:16.604518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.527 [2024-07-15 20:58:16.604545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.527 [2024-07-15 20:58:16.604583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.527 [2024-07-15 20:58:16.604600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.527 [2024-07-15 20:58:16.604652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:49.527 [2024-07-15 20:58:16.604669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.527 #82 NEW cov: 12215 ft: 15664 corp: 35/1098b lim: 90 exec/s: 82 rss: 73Mb L: 55/76 MS: 1 CMP- DE: ".\224G\363\342D+\000"- 00:07:49.527 [2024-07-15 20:58:16.654360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.527 [2024-07-15 20:58:16.654386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.527 #83 NEW cov: 12215 ft: 15684 corp: 36/1125b lim: 90 exec/s: 83 rss: 73Mb L: 27/76 MS: 1 ChangeByte- 00:07:49.527 [2024-07-15 20:58:16.694640] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.527 [2024-07-15 20:58:16.694666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.527 [2024-07-15 20:58:16.694704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:49.527 [2024-07-15 20:58:16.694721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.527 #84 NEW cov: 12215 ft: 15699 corp: 37/1162b lim: 90 exec/s: 84 rss: 73Mb L: 37/76 MS: 1 InsertByte- 00:07:49.527 [2024-07-15 20:58:16.744615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.527 [2024-07-15 20:58:16.744642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.527 #85 NEW cov: 12215 ft: 15724 corp: 38/1189b lim: 90 exec/s: 85 rss: 73Mb L: 27/76 MS: 1 ShuffleBytes- 00:07:49.527 [2024-07-15 20:58:16.784746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:49.527 [2024-07-15 20:58:16.784772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.786 #86 NEW cov: 12215 ft: 15804 corp: 39/1220b lim: 90 exec/s: 43 rss: 73Mb L: 31/76 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:49.786 #86 DONE cov: 12215 ft: 15804 corp: 39/1220b lim: 90 exec/s: 43 rss: 73Mb 00:07:49.786 ###### Recommended dictionary. ###### 00:07:49.786 "\020\000\000\000\000\000\000\000" # Uses: 1 00:07:49.786 "\001\000\000\000" # Uses: 3 00:07:49.786 ".\224G\363\342D+\000" # Uses: 0 00:07:49.786 ###### End of recommended dictionary. 
###### 00:07:49.786 Done 86 runs in 2 second(s) 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.786 20:58:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:49.786 [2024-07-15 20:58:16.987916] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:49.786 [2024-07-15 20:58:16.987987] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790016 ] 00:07:49.786 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.046 [2024-07-15 20:58:17.167388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.046 [2024-07-15 20:58:17.232768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.046 [2024-07-15 20:58:17.291877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.046 [2024-07-15 20:58:17.308163] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:50.046 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.046 INFO: Seed: 4152762941 00:07:50.305 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:50.305 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:50.305 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:50.305 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.305 #2 INITED exec/s: 0 rss: 64Mb 00:07:50.305 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.305 This may also happen if the target rejected all inputs we tried so far 00:07:50.305 [2024-07-15 20:58:17.377561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.305 [2024-07-15 20:58:17.377596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.563 NEW_FUNC[1/698]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:50.563 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:50.563 #26 NEW cov: 11946 ft: 11947 corp: 2/15b lim: 50 exec/s: 0 rss: 70Mb L: 14/14 MS: 4 InsertByte-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:50.563 [2024-07-15 20:58:17.728305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.563 [2024-07-15 20:58:17.728350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.563 #27 NEW cov: 12076 ft: 12674 corp: 3/29b lim: 50 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 CrossOver- 00:07:50.563 [2024-07-15 20:58:17.778365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.563 [2024-07-15 20:58:17.778399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.563 #28 NEW cov: 12082 ft: 12930 corp: 4/43b lim: 50 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ChangeByte- 00:07:50.563 [2024-07-15 20:58:17.828474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.563 [2024-07-15 20:58:17.828502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.563 #31 NEW cov: 12167 ft: 
13256 corp: 5/60b lim: 50 exec/s: 0 rss: 71Mb L: 17/17 MS: 3 CMP-CopyPart-CrossOver- DE: "\001.Yv\021\000\000\000"- 00:07:50.821 [2024-07-15 20:58:17.868676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.821 [2024-07-15 20:58:17.868701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.821 #32 NEW cov: 12167 ft: 13303 corp: 6/74b lim: 50 exec/s: 0 rss: 71Mb L: 14/17 MS: 1 ChangeBit- 00:07:50.821 [2024-07-15 20:58:17.908749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.821 [2024-07-15 20:58:17.908775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.821 #35 NEW cov: 12167 ft: 13356 corp: 7/84b lim: 50 exec/s: 0 rss: 71Mb L: 10/17 MS: 3 ChangeByte-CopyPart-PersAutoDict- DE: "\001.Yv\021\000\000\000"- 00:07:50.821 [2024-07-15 20:58:17.948876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.821 [2024-07-15 20:58:17.948908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.821 #36 NEW cov: 12167 ft: 13413 corp: 8/102b lim: 50 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:07:50.821 [2024-07-15 20:58:17.989036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.821 [2024-07-15 20:58:17.989062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.821 #37 NEW cov: 12167 ft: 13446 corp: 9/120b lim: 50 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 ShuffleBytes- 00:07:50.821 [2024-07-15 20:58:18.039138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.821 [2024-07-15 20:58:18.039166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:50.821 #38 NEW cov: 12167 ft: 13513 corp: 10/137b lim: 50 exec/s: 0 rss: 71Mb L: 17/18 MS: 1 ChangeBinInt- 00:07:50.821 [2024-07-15 20:58:18.089280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:50.821 [2024-07-15 20:58:18.089313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.078 #42 NEW cov: 12167 ft: 13619 corp: 11/148b lim: 50 exec/s: 0 rss: 71Mb L: 11/18 MS: 4 EraseBytes-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:07:51.078 [2024-07-15 20:58:18.139908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.078 [2024-07-15 20:58:18.139941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.078 [2024-07-15 20:58:18.140045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.078 [2024-07-15 20:58:18.140071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.078 [2024-07-15 20:58:18.140180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.078 [2024-07-15 20:58:18.140205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.078 #43 NEW cov: 12167 ft: 14517 corp: 12/179b lim: 50 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:51.078 [2024-07-15 20:58:18.189555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.078 [2024-07-15 20:58:18.189580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.078 #44 NEW cov: 12167 ft: 14590 corp: 13/193b lim: 50 exec/s: 0 rss: 71Mb L: 14/31 MS: 1 CopyPart- 00:07:51.078 [2024-07-15 20:58:18.229683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.078 [2024-07-15 20:58:18.229710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.078 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:51.078 #45 NEW cov: 12190 ft: 14689 corp: 14/207b lim: 50 exec/s: 0 rss: 72Mb L: 14/31 MS: 1 ShuffleBytes- 00:07:51.078 [2024-07-15 20:58:18.279825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.078 [2024-07-15 20:58:18.279856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.078 #51 NEW cov: 12190 ft: 14741 corp: 15/221b lim: 50 exec/s: 0 rss: 72Mb L: 14/31 MS: 1 ShuffleBytes- 00:07:51.078 [2024-07-15 20:58:18.330005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.078 [2024-07-15 20:58:18.330035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.078 #52 NEW cov: 12190 ft: 14764 corp: 16/239b lim: 50 exec/s: 0 rss: 72Mb L: 18/31 MS: 1 ChangeBinInt- 00:07:51.078 [2024-07-15 20:58:18.370121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.078 [2024-07-15 20:58:18.370157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.337 #53 NEW cov: 12190 ft: 14802 corp: 17/253b lim: 50 exec/s: 53 rss: 72Mb L: 14/31 MS: 1 ChangeBit- 00:07:51.337 [2024-07-15 20:58:18.410197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.337 [2024-07-15 20:58:18.410229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.337 #54 NEW cov: 12190 ft: 14809 corp: 18/263b lim: 50 exec/s: 54 rss: 72Mb L: 10/31 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:51.337 [2024-07-15 20:58:18.450351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.337 [2024-07-15 20:58:18.450386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.337 #56 NEW cov: 12190 ft: 14840 corp: 19/276b lim: 50 exec/s: 56 rss: 72Mb L: 13/31 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:51.337 
[2024-07-15 20:58:18.490496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.337 [2024-07-15 20:58:18.490521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.337 #57 NEW cov: 12190 ft: 14855 corp: 20/294b lim: 50 exec/s: 57 rss: 72Mb L: 18/31 MS: 1 ChangeBit- 00:07:51.337 [2024-07-15 20:58:18.540579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.337 [2024-07-15 20:58:18.540610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.337 #58 NEW cov: 12190 ft: 14866 corp: 21/307b lim: 50 exec/s: 58 rss: 72Mb L: 13/31 MS: 1 ChangeByte- 00:07:51.337 [2024-07-15 20:58:18.590707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.337 [2024-07-15 20:58:18.590736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.337 #59 NEW cov: 12190 ft: 14880 corp: 22/320b lim: 50 exec/s: 59 rss: 72Mb L: 13/31 MS: 1 ShuffleBytes- 00:07:51.595 [2024-07-15 20:58:18.640843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.595 [2024-07-15 20:58:18.640875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.595 #60 NEW cov: 12190 ft: 14889 corp: 23/337b lim: 50 exec/s: 60 rss: 72Mb L: 17/31 MS: 1 ChangeByte- 00:07:51.595 [2024-07-15 20:58:18.691019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.595 [2024-07-15 20:58:18.691044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.595 #61 NEW cov: 12190 ft: 14907 corp: 24/355b lim: 50 exec/s: 61 rss: 72Mb L: 18/31 MS: 1 ChangeByte- 00:07:51.595 [2024-07-15 20:58:18.731423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.595 [2024-07-15 20:58:18.731451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.595 [2024-07-15 20:58:18.731575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.595 [2024-07-15 20:58:18.731596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.595 #62 NEW cov: 12190 ft: 15170 corp: 25/376b lim: 50 exec/s: 62 rss: 72Mb L: 21/31 MS: 1 CopyPart- 00:07:51.595 [2024-07-15 20:58:18.771828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.595 [2024-07-15 20:58:18.771860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.595 [2024-07-15 20:58:18.771984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.595 [2024-07-15 20:58:18.772011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:51.595 [2024-07-15 20:58:18.772125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.595 [2024-07-15 20:58:18.772147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.595 #63 NEW cov: 12190 ft: 15215 corp: 26/407b lim: 50 exec/s: 63 rss: 72Mb L: 31/31 MS: 1 ChangeBit- 00:07:51.595 [2024-07-15 20:58:18.831408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.595 [2024-07-15 20:58:18.831438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.595 #64 NEW cov: 12190 ft: 15242 corp: 27/426b lim: 50 exec/s: 64 rss: 72Mb L: 19/31 MS: 1 InsertByte- 00:07:51.595 [2024-07-15 20:58:18.871569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.595 [2024-07-15 20:58:18.871599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.853 #65 NEW cov: 12190 ft: 15255 corp: 28/438b lim: 50 exec/s: 65 rss: 72Mb L: 12/31 MS: 1 EraseBytes- 00:07:51.853 [2024-07-15 20:58:18.921726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.853 [2024-07-15 20:58:18.921751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.853 #66 NEW cov: 12190 ft: 15278 corp: 29/452b lim: 50 exec/s: 66 rss: 72Mb L: 14/31 MS: 1 ChangeByte- 00:07:51.853 [2024-07-15 20:58:18.961785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.853 [2024-07-15 20:58:18.961816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.853 #67 NEW cov: 12190 ft: 15298 corp: 30/466b lim: 50 exec/s: 67 rss: 72Mb L: 14/31 MS: 1 ChangeBinInt- 00:07:51.853 [2024-07-15 20:58:19.012495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.853 [2024-07-15 20:58:19.012528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.853 [2024-07-15 20:58:19.012628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:51.853 [2024-07-15 20:58:19.012649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.853 [2024-07-15 20:58:19.012760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:51.853 [2024-07-15 20:58:19.012779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.853 #68 NEW cov: 12190 ft: 15311 corp: 31/497b lim: 50 exec/s: 68 rss: 72Mb L: 31/31 MS: 1 ShuffleBytes- 00:07:51.853 [2024-07-15 20:58:19.052164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.853 [2024-07-15 20:58:19.052194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:07:51.853 #69 NEW cov: 12190 ft: 15312 corp: 32/515b lim: 50 exec/s: 69 rss: 73Mb L: 18/31 MS: 1 ChangeByte- 00:07:51.853 [2024-07-15 20:58:19.102326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:51.853 [2024-07-15 20:58:19.102353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.853 #70 NEW cov: 12190 ft: 15324 corp: 33/533b lim: 50 exec/s: 70 rss: 73Mb L: 18/31 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:52.111 [2024-07-15 20:58:19.152410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.111 [2024-07-15 20:58:19.152446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.111 #71 NEW cov: 12190 ft: 15334 corp: 34/547b lim: 50 exec/s: 71 rss: 73Mb L: 14/31 MS: 1 ChangeByte- 00:07:52.111 [2024-07-15 20:58:19.192480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.112 [2024-07-15 20:58:19.192520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.112 #72 NEW cov: 12190 ft: 15352 corp: 35/564b lim: 50 exec/s: 72 rss: 73Mb L: 17/31 MS: 1 PersAutoDict- DE: "\001.Yv\021\000\000\000"- 00:07:52.112 [2024-07-15 20:58:19.232611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.112 [2024-07-15 20:58:19.232639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.112 #73 NEW cov: 12190 ft: 15362 corp: 36/582b lim: 50 exec/s: 73 rss: 73Mb L: 18/31 MS: 1 ChangeBinInt- 00:07:52.112 [2024-07-15 20:58:19.272791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.112 [2024-07-15 20:58:19.272823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.112 #74 NEW cov: 12190 ft: 15373 corp: 37/592b lim: 50 exec/s: 74 rss: 73Mb L: 10/31 MS: 1 CopyPart- 00:07:52.112 [2024-07-15 20:58:19.322885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.112 [2024-07-15 20:58:19.322909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.112 #75 NEW cov: 12190 ft: 15422 corp: 38/606b lim: 50 exec/s: 75 rss: 73Mb L: 14/31 MS: 1 ChangeByte- 00:07:52.112 [2024-07-15 20:58:19.363032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:52.112 [2024-07-15 20:58:19.363059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.112 #76 NEW cov: 12190 ft: 15430 corp: 39/620b lim: 50 exec/s: 38 rss: 73Mb L: 14/31 MS: 1 ChangeByte- 00:07:52.112 #76 DONE cov: 12190 ft: 15430 corp: 39/620b lim: 50 exec/s: 38 rss: 73Mb 00:07:52.112 ###### Recommended dictionary. ###### 00:07:52.112 "\001.Yv\021\000\000\000" # Uses: 2 00:07:52.112 "\377\377\377\377" # Uses: 0 00:07:52.112 "\000\000\000\000" # Uses: 0 00:07:52.112 ###### End of recommended dictionary. 
###### 00:07:52.112 Done 76 runs in 2 second(s) 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:52.371 20:58:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:52.371 [2024-07-15 20:58:19.550784] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:52.371 [2024-07-15 20:58:19.550871] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790545 ] 00:07:52.371 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.630 [2024-07-15 20:58:19.725646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.630 [2024-07-15 20:58:19.790177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.630 [2024-07-15 20:58:19.848988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.630 [2024-07-15 20:58:19.865247] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:52.630 INFO: Running with entropic power schedule (0xFF, 100). 00:07:52.630 INFO: Seed: 2413822924 00:07:52.630 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:52.630 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:52.630 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:52.630 INFO: A corpus is not provided, starting from an empty corpus 00:07:52.630 #2 INITED exec/s: 0 rss: 63Mb 00:07:52.630 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:52.630 This may also happen if the target rejected all inputs we tried so far 00:07:52.630 [2024-07-15 20:58:19.913059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:52.630 [2024-07-15 20:58:19.913089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.630 [2024-07-15 20:58:19.913141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:52.630 [2024-07-15 20:58:19.913156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.630 [2024-07-15 20:58:19.913209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:52.630 [2024-07-15 20:58:19.913225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.148 NEW_FUNC[1/698]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:53.148 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:53.148 #5 NEW cov: 11972 ft: 11973 corp: 2/57b lim: 85 exec/s: 0 rss: 70Mb L: 56/56 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:07:53.148 [2024-07-15 20:58:20.254884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.148 [2024-07-15 20:58:20.254951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.255102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.148 [2024-07-15 20:58:20.255137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.255283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.148 [2024-07-15 20:58:20.255320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.148 #6 NEW cov: 12102 ft: 12817 corp: 3/113b lim: 85 exec/s: 0 rss: 70Mb L: 56/56 MS: 1 ChangeByte- 00:07:53.148 [2024-07-15 20:58:20.314926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.148 [2024-07-15 20:58:20.314959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.315068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.148 [2024-07-15 20:58:20.315087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.315204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.148 [2024-07-15 20:58:20.315227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.148 #7 NEW cov: 12108 ft: 13057 corp: 4/169b lim: 85 exec/s: 0 rss: 70Mb L: 56/56 MS: 1 ChangeBit- 00:07:53.148 [2024-07-15 20:58:20.364983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.148 [2024-07-15 20:58:20.365013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.365117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.148 [2024-07-15 20:58:20.365137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.365247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.148 [2024-07-15 20:58:20.365268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.148 #8 NEW cov: 12193 ft: 13335 corp: 5/225b lim: 85 exec/s: 0 rss: 70Mb L: 56/56 MS: 1 ShuffleBytes- 00:07:53.148 [2024-07-15 20:58:20.405244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.148 [2024-07-15 20:58:20.405274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.405373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.148 [2024-07-15 20:58:20.405394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.405514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.148 [2024-07-15 20:58:20.405533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.148 [2024-07-15 20:58:20.405651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.148 [2024-07-15 20:58:20.405673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.148 #9 NEW cov: 12193 ft: 13715 corp: 6/306b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:07:53.407 [2024-07-15 20:58:20.445457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.407 [2024-07-15 20:58:20.445489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.407 [2024-07-15 20:58:20.445597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.408 [2024-07-15 20:58:20.445624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.445739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.408 [2024-07-15 20:58:20.445766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.445880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.408 [2024-07-15 20:58:20.445901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.408 #10 NEW cov: 12193 ft: 13767 corp: 7/387b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 ChangeBit- 00:07:53.408 [2024-07-15 20:58:20.495678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.408 [2024-07-15 20:58:20.495710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.495825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.408 [2024-07-15 20:58:20.495846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.495976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.408 [2024-07-15 20:58:20.495998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.496118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.408 [2024-07-15 20:58:20.496142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.408 #11 NEW cov: 12193 ft: 13829 corp: 8/468b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 ChangeBit- 00:07:53.408 [2024-07-15 20:58:20.545498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.408 [2024-07-15 20:58:20.545528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.545615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.408 [2024-07-15 20:58:20.545638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.545757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.408 [2024-07-15 20:58:20.545780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.408 #12 NEW cov: 12193 ft: 13848 corp: 9/525b lim: 85 exec/s: 0 rss: 70Mb L: 57/81 MS: 1 InsertByte- 00:07:53.408 [2024-07-15 20:58:20.595721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.408 [2024-07-15 20:58:20.595750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.595845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.408 [2024-07-15 20:58:20.595864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.595983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.408 [2024-07-15 20:58:20.596006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.408 #13 NEW cov: 12193 ft: 13950 corp: 10/581b lim: 85 exec/s: 0 rss: 71Mb L: 56/81 MS: 1 CMP- DE: "\000\000\000\177"- 00:07:53.408 [2024-07-15 20:58:20.635770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.408 [2024-07-15 20:58:20.635800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.635884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.408 [2024-07-15 20:58:20.635908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.636024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.408 [2024-07-15 20:58:20.636046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.408 #14 NEW cov: 12193 ft: 14025 corp: 11/637b lim: 85 exec/s: 0 rss: 71Mb L: 56/81 MS: 1 ShuffleBytes- 00:07:53.408 [2024-07-15 20:58:20.686152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.408 [2024-07-15 20:58:20.686184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.686251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.408 [2024-07-15 20:58:20.686275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.686388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.408 [2024-07-15 20:58:20.686414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.408 [2024-07-15 20:58:20.686530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.408 [2024-07-15 20:58:20.686550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.668 #16 NEW cov: 12193 ft: 14100 corp: 12/705b lim: 85 exec/s: 0 rss: 71Mb L: 68/81 MS: 2 PersAutoDict-InsertRepeatedBytes- DE: "\000\000\000\177"- 00:07:53.668 [2024-07-15 20:58:20.725788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.668 [2024-07-15 20:58:20.725820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.725941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.668 [2024-07-15 20:58:20.725962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.668 #17 NEW cov: 12193 ft: 14419 corp: 13/750b lim: 85 exec/s: 0 rss: 71Mb L: 45/81 MS: 1 CrossOver- 00:07:53.668 [2024-07-15 20:58:20.776411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.668 [2024-07-15 20:58:20.776447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.776516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.668 [2024-07-15 20:58:20.776539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.776655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.668 [2024-07-15 20:58:20.776679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.776791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.668 [2024-07-15 20:58:20.776814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.668 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:53.668 #18 NEW cov: 12216 ft: 14441 corp: 14/832b lim: 85 exec/s: 0 rss: 71Mb L: 82/82 MS: 1 InsertByte- 00:07:53.668 [2024-07-15 20:58:20.816705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.668 [2024-07-15 20:58:20.816736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.816821] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.668 [2024-07-15 20:58:20.816840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.816954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.668 [2024-07-15 20:58:20.816980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.817089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.668 [2024-07-15 20:58:20.817112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.817223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:53.668 [2024-07-15 20:58:20.817244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:53.668 #19 NEW cov: 12216 ft: 14485 corp: 15/917b lim: 85 exec/s: 0 rss: 71Mb L: 85/85 MS: 1 CMP- DE: "\005\000\000\000"- 00:07:53.668 [2024-07-15 20:58:20.856606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.668 [2024-07-15 20:58:20.856635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.856712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.668 [2024-07-15 20:58:20.856735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.856847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.668 [2024-07-15 20:58:20.856867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.856995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.668 [2024-07-15 20:58:20.857019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.668 #20 NEW cov: 12216 ft: 14511 corp: 16/998b lim: 85 exec/s: 0 rss: 71Mb L: 81/85 MS: 1 ShuffleBytes- 00:07:53.668 [2024-07-15 20:58:20.907027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.668 [2024-07-15 20:58:20.907055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.907146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.668 [2024-07-15 20:58:20.907168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.907281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 
00:07:53.668 [2024-07-15 20:58:20.907303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.907415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.668 [2024-07-15 20:58:20.907435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.907561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:53.668 [2024-07-15 20:58:20.907582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:53.668 #21 NEW cov: 12216 ft: 14521 corp: 17/1083b lim: 85 exec/s: 21 rss: 71Mb L: 85/85 MS: 1 ChangeBit- 00:07:53.668 [2024-07-15 20:58:20.956788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.668 [2024-07-15 20:58:20.956819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.956941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.668 [2024-07-15 20:58:20.956966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.668 [2024-07-15 20:58:20.957079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.668 [2024-07-15 20:58:20.957102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.928 #22 NEW cov: 12216 ft: 14529 corp: 18/1136b lim: 85 exec/s: 22 rss: 71Mb L: 53/85 MS: 1 CMP- DE: "\013\231bz\345D+\000"- 00:07:53.928 [2024-07-15 20:58:21.006791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.928 [2024-07-15 20:58:21.006818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.006905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.928 [2024-07-15 20:58:21.006925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.007045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.928 [2024-07-15 20:58:21.007067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.928 #23 NEW cov: 12216 ft: 14565 corp: 19/1193b lim: 85 exec/s: 23 rss: 71Mb L: 57/85 MS: 1 ChangeByte- 00:07:53.928 [2024-07-15 20:58:21.057041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.928 [2024-07-15 20:58:21.057074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.057198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER 
(0d) sqid:1 cid:1 nsid:0 00:07:53.928 [2024-07-15 20:58:21.057225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.057334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.928 [2024-07-15 20:58:21.057354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.928 #24 NEW cov: 12216 ft: 14661 corp: 20/1249b lim: 85 exec/s: 24 rss: 71Mb L: 56/85 MS: 1 CopyPart- 00:07:53.928 [2024-07-15 20:58:21.097390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.928 [2024-07-15 20:58:21.097420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.097544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.928 [2024-07-15 20:58:21.097567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.097689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.928 [2024-07-15 20:58:21.097713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.097832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.928 [2024-07-15 20:58:21.097856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.928 #25 NEW cov: 12216 ft: 14673 corp: 21/1317b lim: 85 exec/s: 25 rss: 71Mb L: 68/85 MS: 1 ChangeBinInt- 00:07:53.928 [2024-07-15 20:58:21.147208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.928 [2024-07-15 20:58:21.147235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.147330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.928 [2024-07-15 20:58:21.147352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.147476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.928 [2024-07-15 20:58:21.147502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.928 #26 NEW cov: 12216 ft: 14681 corp: 22/1373b lim: 85 exec/s: 26 rss: 71Mb L: 56/85 MS: 1 ChangeByte- 00:07:53.928 [2024-07-15 20:58:21.187514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:53.928 [2024-07-15 20:58:21.187544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.187645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:53.928 [2024-07-15 20:58:21.187664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.187782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:53.928 [2024-07-15 20:58:21.187802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.928 [2024-07-15 20:58:21.187919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:53.928 [2024-07-15 20:58:21.187943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.928 #27 NEW cov: 12216 ft: 14707 corp: 23/1454b lim: 85 exec/s: 27 rss: 71Mb L: 81/85 MS: 1 ShuffleBytes- 00:07:54.192 [2024-07-15 20:58:21.227642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.192 [2024-07-15 20:58:21.227674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.227780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.192 [2024-07-15 20:58:21.227799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.227916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.192 [2024-07-15 20:58:21.227941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.228062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.192 [2024-07-15 20:58:21.228086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.192 #28 NEW cov: 12216 ft: 14729 corp: 24/1535b lim: 85 exec/s: 28 rss: 72Mb L: 81/85 MS: 1 PersAutoDict- DE: "\013\231bz\345D+\000"- 00:07:54.192 [2024-07-15 20:58:21.277797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.192 [2024-07-15 20:58:21.277828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.277916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.192 [2024-07-15 20:58:21.277938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.278052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.192 [2024-07-15 20:58:21.278071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.278187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.192 [2024-07-15 20:58:21.278206] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.192 #29 NEW cov: 12216 ft: 14738 corp: 25/1608b lim: 85 exec/s: 29 rss: 72Mb L: 73/85 MS: 1 EraseBytes- 00:07:54.192 [2024-07-15 20:58:21.317923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.192 [2024-07-15 20:58:21.317950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.318046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.192 [2024-07-15 20:58:21.318063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.318184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.192 [2024-07-15 20:58:21.318206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.318330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.192 [2024-07-15 20:58:21.318357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.192 #30 NEW cov: 12216 ft: 14753 corp: 26/1690b lim: 85 exec/s: 30 rss: 72Mb L: 82/85 MS: 1 ChangeBit- 00:07:54.192 [2024-07-15 20:58:21.368091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.192 [2024-07-15 20:58:21.368121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.368203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.192 [2024-07-15 20:58:21.368239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.368354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.192 [2024-07-15 20:58:21.368381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.368503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.192 [2024-07-15 20:58:21.368527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.192 #31 NEW cov: 12216 ft: 14757 corp: 27/1771b lim: 85 exec/s: 31 rss: 72Mb L: 81/85 MS: 1 ChangeBit- 00:07:54.192 [2024-07-15 20:58:21.408005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.192 [2024-07-15 20:58:21.408035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.408150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.192 [2024-07-15 
20:58:21.408174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.408297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.192 [2024-07-15 20:58:21.408320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.192 #32 NEW cov: 12216 ft: 14758 corp: 28/1827b lim: 85 exec/s: 32 rss: 72Mb L: 56/85 MS: 1 CrossOver- 00:07:54.192 [2024-07-15 20:58:21.448297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.192 [2024-07-15 20:58:21.448330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.448439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.192 [2024-07-15 20:58:21.448463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.448603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.192 [2024-07-15 20:58:21.448625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.192 [2024-07-15 20:58:21.448750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.192 [2024-07-15 20:58:21.448776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.192 #33 NEW cov: 12216 ft: 14770 corp: 29/1910b lim: 85 exec/s: 33 rss: 72Mb L: 83/85 MS: 1 InsertByte- 00:07:54.516 [2024-07-15 20:58:21.488695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.516 [2024-07-15 20:58:21.488727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.488829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.516 [2024-07-15 20:58:21.488858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.488973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.516 [2024-07-15 20:58:21.488995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.489112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.516 [2024-07-15 20:58:21.489135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.489258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:54.516 [2024-07-15 20:58:21.489280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:54.516 #34 NEW cov: 12216 ft: 14785 corp: 30/1995b lim: 85 exec/s: 34 rss: 72Mb L: 85/85 MS: 1 CrossOver- 00:07:54.516 [2024-07-15 20:58:21.538618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.516 [2024-07-15 20:58:21.538651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.538752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.516 [2024-07-15 20:58:21.538775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.538888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.516 [2024-07-15 20:58:21.538911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.539030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.516 [2024-07-15 20:58:21.539051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.516 #35 NEW cov: 12216 ft: 14812 corp: 31/2076b lim: 85 exec/s: 35 rss: 72Mb L: 81/85 MS: 1 CrossOver- 00:07:54.516 [2024-07-15 20:58:21.578452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.516 [2024-07-15 20:58:21.578482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.578587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.516 [2024-07-15 20:58:21.578613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.578730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.516 [2024-07-15 20:58:21.578752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.516 #36 NEW cov: 12216 ft: 14830 corp: 32/2132b lim: 85 exec/s: 36 rss: 72Mb L: 56/85 MS: 1 CrossOver- 00:07:54.516 [2024-07-15 20:58:21.628901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.516 [2024-07-15 20:58:21.628936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.629057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.516 [2024-07-15 20:58:21.629078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.629203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.516 [2024-07-15 20:58:21.629227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.629347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.516 [2024-07-15 20:58:21.629374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.516 #37 NEW cov: 12216 ft: 14834 corp: 33/2214b lim: 85 exec/s: 37 rss: 72Mb L: 82/85 MS: 1 InsertByte- 00:07:54.516 [2024-07-15 20:58:21.679012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.516 [2024-07-15 20:58:21.679044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.679128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.516 [2024-07-15 20:58:21.679148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.679270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.516 [2024-07-15 20:58:21.679293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.516 [2024-07-15 20:58:21.679412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.516 [2024-07-15 20:58:21.679431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.516 #38 NEW cov: 12216 ft: 14838 corp: 34/2296b lim: 85 exec/s: 38 rss: 72Mb L: 82/85 MS: 1 InsertByte- 00:07:54.516 [2024-07-15 20:58:21.718747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.517 [2024-07-15 20:58:21.718775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.517 [2024-07-15 20:58:21.718902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.517 [2024-07-15 20:58:21.718927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.517 #39 NEW cov: 12216 ft: 14843 corp: 35/2342b lim: 85 exec/s: 39 rss: 72Mb L: 46/85 MS: 1 CrossOver- 00:07:54.517 [2024-07-15 20:58:21.769038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.517 [2024-07-15 20:58:21.769065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.517 [2024-07-15 20:58:21.769157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.517 [2024-07-15 20:58:21.769179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.517 [2024-07-15 20:58:21.769297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.517 [2024-07-15 20:58:21.769322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.835 #40 NEW cov: 12216 ft: 14849 corp: 36/2398b lim: 85 exec/s: 40 rss: 72Mb L: 56/85 MS: 1 CopyPart- 00:07:54.835 [2024-07-15 20:58:21.818970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.835 [2024-07-15 20:58:21.819003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.835 [2024-07-15 20:58:21.819113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.835 [2024-07-15 20:58:21.819134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.835 #41 NEW cov: 12216 ft: 14871 corp: 37/2443b lim: 85 exec/s: 41 rss: 72Mb L: 45/85 MS: 1 ChangeBinInt- 00:07:54.835 [2024-07-15 20:58:21.859765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.835 [2024-07-15 20:58:21.859796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.835 [2024-07-15 20:58:21.859886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.835 [2024-07-15 20:58:21.859904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.835 [2024-07-15 20:58:21.860014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.835 [2024-07-15 20:58:21.860032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.835 [2024-07-15 20:58:21.860142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:54.835 [2024-07-15 20:58:21.860166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.835 [2024-07-15 20:58:21.860287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:54.835 [2024-07-15 20:58:21.860310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:54.835 #42 NEW cov: 12216 ft: 14885 corp: 38/2528b lim: 85 exec/s: 42 rss: 72Mb L: 85/85 MS: 1 CrossOver- 00:07:54.835 [2024-07-15 20:58:21.909529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:54.835 [2024-07-15 20:58:21.909564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.835 [2024-07-15 20:58:21.909672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:54.835 [2024-07-15 20:58:21.909692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.835 [2024-07-15 20:58:21.909811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:54.835 [2024-07-15 20:58:21.909834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.835 #43 NEW cov: 12216 ft: 14892 corp: 39/2588b lim: 85 exec/s: 21 rss: 72Mb L: 60/85 MS: 1 CrossOver- 00:07:54.835 #43 DONE cov: 12216 ft: 14892 corp: 39/2588b lim: 85 exec/s: 21 rss: 72Mb 00:07:54.835 ###### Recommended dictionary. ###### 00:07:54.835 "\000\000\000\177" # Uses: 1 00:07:54.835 "\005\000\000\000" # Uses: 0 00:07:54.835 "\013\231bz\345D+\000" # Uses: 1 00:07:54.835 ###### End of recommended dictionary. ###### 00:07:54.835 Done 43 runs in 2 second(s) 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:54.835 20:58:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:54.836 [2024-07-15 20:58:22.113320] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:54.836 [2024-07-15 20:58:22.113391] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790958 ] 00:07:55.095 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.095 [2024-07-15 20:58:22.292841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.095 [2024-07-15 20:58:22.359386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.354 [2024-07-15 20:58:22.418697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.354 [2024-07-15 20:58:22.434996] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:55.354 INFO: Running with entropic power schedule (0xFF, 100). 00:07:55.354 INFO: Seed: 690845988 00:07:55.354 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:55.354 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:55.354 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:55.354 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.354 #2 INITED exec/s: 0 rss: 64Mb 00:07:55.354 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:55.354 This may also happen if the target rejected all inputs we tried so far 00:07:55.354 [2024-07-15 20:58:22.480482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.354 [2024-07-15 20:58:22.480513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.354 [2024-07-15 20:58:22.480559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.354 [2024-07-15 20:58:22.480575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.354 [2024-07-15 20:58:22.480638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.354 [2024-07-15 20:58:22.480654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.354 [2024-07-15 20:58:22.480709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.354 [2024-07-15 20:58:22.480724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.613 NEW_FUNC[1/696]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:55.613 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:55.613 #7 NEW cov: 11904 ft: 11903 corp: 2/22b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 5 ShuffleBytes-CopyPart-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:07:55.613 [2024-07-15 20:58:22.811216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.613 [2024-07-15 20:58:22.811250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.613 [2024-07-15 20:58:22.811292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.613 [2024-07-15 20:58:22.811308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.613 [2024-07-15 20:58:22.811361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.613 [2024-07-15 20:58:22.811377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.613 [2024-07-15 20:58:22.811435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.613 [2024-07-15 20:58:22.811454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.613 NEW_FUNC[1/1]: 0x1d8e7e0 in _get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:332 00:07:55.613 #8 NEW cov: 12035 ft: 12421 corp: 3/43b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 ChangeBinInt- 00:07:55.613 [2024-07-15 20:58:22.871319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.613 [2024-07-15 20:58:22.871349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.613 [2024-07-15 20:58:22.871384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.613 [2024-07-15 20:58:22.871400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.613 [2024-07-15 20:58:22.871457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.614 [2024-07-15 20:58:22.871488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.614 [2024-07-15 20:58:22.871553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.614 [2024-07-15 20:58:22.871568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.874 #9 NEW cov: 12041 ft: 12791 corp: 4/64b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 CrossOver- 00:07:55.874 [2024-07-15 20:58:22.921662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.874 [2024-07-15 20:58:22.921690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:22.921738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.874 [2024-07-15 20:58:22.921754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:22.921806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.874 [2024-07-15 20:58:22.921821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:22.921875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.874 [2024-07-15 20:58:22.921891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.874 #10 NEW cov: 12126 ft: 13064 corp: 5/85b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 ChangeBit- 00:07:55.874 [2024-07-15 20:58:22.961590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.874 [2024-07-15 20:58:22.961617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:22.961665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.874 [2024-07-15 20:58:22.961682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:22.961735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.874 [2024-07-15 20:58:22.961750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:22.961804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.874 [2024-07-15 20:58:22.961822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.874 #11 NEW cov: 12126 ft: 13105 corp: 6/107b lim: 25 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 CopyPart- 00:07:55.874 [2024-07-15 20:58:23.001837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.874 [2024-07-15 20:58:23.001864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.001919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.874 [2024-07-15 20:58:23.001935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.001989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.874 [2024-07-15 20:58:23.002006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.002061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.874 [2024-07-15 20:58:23.002076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.002132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:55.874 [2024-07-15 20:58:23.002147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:55.874 #12 NEW cov: 12126 ft: 13268 corp: 7/132b lim: 25 
exec/s: 0 rss: 71Mb L: 25/25 MS: 1 CrossOver- 00:07:55.874 [2024-07-15 20:58:23.051814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.874 [2024-07-15 20:58:23.051840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.051893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.874 [2024-07-15 20:58:23.051909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.051963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.874 [2024-07-15 20:58:23.051979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.052035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.874 [2024-07-15 20:58:23.052050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.874 #13 NEW cov: 12126 ft: 13331 corp: 8/153b lim: 25 exec/s: 0 rss: 71Mb L: 21/25 MS: 1 ChangeBinInt- 00:07:55.874 [2024-07-15 20:58:23.092055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.874 [2024-07-15 20:58:23.092082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.092137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.874 [2024-07-15 20:58:23.092152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.092205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.874 [2024-07-15 20:58:23.092221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.092275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.874 [2024-07-15 20:58:23.092309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.092366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:55.874 [2024-07-15 20:58:23.092382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:55.874 #14 NEW cov: 12126 ft: 13352 corp: 9/178b lim: 25 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:55.874 [2024-07-15 20:58:23.142053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:55.874 [2024-07-15 20:58:23.142079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.142133] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:55.874 [2024-07-15 20:58:23.142149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.142202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:55.874 [2024-07-15 20:58:23.142217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.874 [2024-07-15 20:58:23.142271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:55.874 [2024-07-15 20:58:23.142286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.874 #15 NEW cov: 12126 ft: 13416 corp: 10/199b lim: 25 exec/s: 0 rss: 71Mb L: 21/25 MS: 1 ChangeByte- 00:07:56.135 [2024-07-15 20:58:23.181961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.135 [2024-07-15 20:58:23.181987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.135 [2024-07-15 20:58:23.182028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.135 [2024-07-15 20:58:23.182043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.135 #16 NEW cov: 12126 ft: 13966 corp: 11/213b lim: 25 exec/s: 0 rss: 71Mb L: 14/25 MS: 1 CrossOver- 00:07:56.135 [2024-07-15 20:58:23.222328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.136 [2024-07-15 20:58:23.222355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.222408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.136 [2024-07-15 20:58:23.222423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.222481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.136 [2024-07-15 20:58:23.222496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.222553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.136 [2024-07-15 20:58:23.222569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.136 #17 NEW cov: 12126 ft: 13994 corp: 12/235b lim: 25 exec/s: 0 rss: 71Mb L: 22/25 MS: 1 CopyPart- 00:07:56.136 [2024-07-15 20:58:23.272580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.136 [2024-07-15 20:58:23.272607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.272658] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.136 [2024-07-15 20:58:23.272674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.272726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.136 [2024-07-15 20:58:23.272741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.272795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.136 [2024-07-15 20:58:23.272810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.272865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:56.136 [2024-07-15 20:58:23.272882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:56.136 #18 NEW cov: 12126 ft: 14043 corp: 13/260b lim: 25 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 CopyPart- 00:07:56.136 [2024-07-15 20:58:23.312552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.136 [2024-07-15 20:58:23.312579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.312632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.136 [2024-07-15 20:58:23.312649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.312702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.136 [2024-07-15 20:58:23.312718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.312773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.136 [2024-07-15 20:58:23.312789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.136 #19 NEW cov: 12126 ft: 14104 corp: 14/281b lim: 25 exec/s: 0 rss: 71Mb L: 21/25 MS: 1 ShuffleBytes- 00:07:56.136 [2024-07-15 20:58:23.362697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.136 [2024-07-15 20:58:23.362724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.362788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.136 [2024-07-15 20:58:23.362804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.362858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.136 [2024-07-15 20:58:23.362873] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.362927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.136 [2024-07-15 20:58:23.362943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.136 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:56.136 #20 NEW cov: 12149 ft: 14138 corp: 15/302b lim: 25 exec/s: 0 rss: 71Mb L: 21/25 MS: 1 ChangeByte- 00:07:56.136 [2024-07-15 20:58:23.402805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.136 [2024-07-15 20:58:23.402836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.402876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.136 [2024-07-15 20:58:23.402891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.402944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.136 [2024-07-15 20:58:23.402959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.136 [2024-07-15 20:58:23.403014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.136 [2024-07-15 20:58:23.403033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.136 #21 NEW cov: 12149 ft: 14164 corp: 16/323b lim: 25 exec/s: 0 rss: 71Mb L: 21/25 MS: 1 ChangeByte- 00:07:56.395 [2024-07-15 20:58:23.442937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.395 [2024-07-15 20:58:23.442965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.395 [2024-07-15 20:58:23.443006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.395 [2024-07-15 20:58:23.443020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.395 [2024-07-15 20:58:23.443074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.395 [2024-07-15 20:58:23.443090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.395 [2024-07-15 20:58:23.443145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.395 [2024-07-15 20:58:23.443161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.395 #22 NEW cov: 12149 ft: 14191 corp: 17/344b lim: 25 exec/s: 22 rss: 71Mb L: 21/25 MS: 1 ShuffleBytes- 00:07:56.395 [2024-07-15 20:58:23.493062] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.395 [2024-07-15 20:58:23.493088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.395 [2024-07-15 20:58:23.493140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.395 [2024-07-15 20:58:23.493156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.395 [2024-07-15 20:58:23.493209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.396 [2024-07-15 20:58:23.493223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.493278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.396 [2024-07-15 20:58:23.493294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.396 #23 NEW cov: 12149 ft: 14216 corp: 18/367b lim: 25 exec/s: 23 rss: 71Mb L: 23/25 MS: 1 CopyPart- 00:07:56.396 [2024-07-15 20:58:23.533191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.396 [2024-07-15 20:58:23.533218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.533268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.396 [2024-07-15 20:58:23.533286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.533340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.396 [2024-07-15 20:58:23.533355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.533408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.396 [2024-07-15 20:58:23.533423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.396 #24 NEW cov: 12149 ft: 14234 corp: 19/391b lim: 25 exec/s: 24 rss: 71Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:07:56.396 [2024-07-15 20:58:23.583326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.396 [2024-07-15 20:58:23.583353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.583400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.396 [2024-07-15 20:58:23.583415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.583470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.396 [2024-07-15 
20:58:23.583485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.583538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.396 [2024-07-15 20:58:23.583554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.396 #25 NEW cov: 12149 ft: 14254 corp: 20/413b lim: 25 exec/s: 25 rss: 71Mb L: 22/25 MS: 1 InsertByte- 00:07:56.396 [2024-07-15 20:58:23.623311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.396 [2024-07-15 20:58:23.623338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.623399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.396 [2024-07-15 20:58:23.623416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.623474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.396 [2024-07-15 20:58:23.623491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.396 #26 NEW cov: 12149 ft: 14466 corp: 21/429b lim: 25 exec/s: 26 rss: 71Mb L: 16/25 MS: 1 EraseBytes- 00:07:56.396 [2024-07-15 20:58:23.663682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.396 [2024-07-15 20:58:23.663708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.663761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.396 [2024-07-15 20:58:23.663776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.663831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.396 [2024-07-15 20:58:23.663846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.663900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.396 [2024-07-15 20:58:23.663917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.396 [2024-07-15 20:58:23.663976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:56.396 [2024-07-15 20:58:23.663992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:56.396 #27 NEW cov: 12149 ft: 14479 corp: 22/454b lim: 25 exec/s: 27 rss: 71Mb L: 25/25 MS: 1 ChangeBit- 00:07:56.655 [2024-07-15 20:58:23.703701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.655 [2024-07-15 20:58:23.703729] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.703776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.655 [2024-07-15 20:58:23.703791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.703844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.655 [2024-07-15 20:58:23.703861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.703914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.655 [2024-07-15 20:58:23.703931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.655 #28 NEW cov: 12149 ft: 14534 corp: 23/476b lim: 25 exec/s: 28 rss: 71Mb L: 22/25 MS: 1 CopyPart- 00:07:56.655 [2024-07-15 20:58:23.743755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.655 [2024-07-15 20:58:23.743781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.743833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.655 [2024-07-15 20:58:23.743850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.743904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.655 [2024-07-15 20:58:23.743920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.743974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.655 [2024-07-15 20:58:23.743989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.655 #29 NEW cov: 12149 ft: 14552 corp: 24/497b lim: 25 exec/s: 29 rss: 71Mb L: 21/25 MS: 1 CrossOver- 00:07:56.655 [2024-07-15 20:58:23.784012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.655 [2024-07-15 20:58:23.784039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.784093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.655 [2024-07-15 20:58:23.784109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.784161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.655 [2024-07-15 20:58:23.784177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:56.655 [2024-07-15 20:58:23.784233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.655 [2024-07-15 20:58:23.784248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.784304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:56.655 [2024-07-15 20:58:23.784320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:56.655 #30 NEW cov: 12149 ft: 14585 corp: 25/522b lim: 25 exec/s: 30 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:56.655 [2024-07-15 20:58:23.834021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.655 [2024-07-15 20:58:23.834048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.834099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.655 [2024-07-15 20:58:23.834115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.834168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.655 [2024-07-15 20:58:23.834183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.834237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.655 [2024-07-15 20:58:23.834253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.655 #31 NEW cov: 12149 ft: 14605 corp: 26/545b lim: 25 exec/s: 31 rss: 72Mb L: 23/25 MS: 1 ChangeBit- 00:07:56.655 [2024-07-15 20:58:23.884162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.655 [2024-07-15 20:58:23.884188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.884237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.655 [2024-07-15 20:58:23.884253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.884305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.655 [2024-07-15 20:58:23.884320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.884377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.655 [2024-07-15 20:58:23.884393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.655 #32 NEW cov: 12149 ft: 14622 corp: 27/566b lim: 25 exec/s: 32 rss: 72Mb L: 21/25 MS: 1 ShuffleBytes- 
00:07:56.655 [2024-07-15 20:58:23.924308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.655 [2024-07-15 20:58:23.924335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.655 [2024-07-15 20:58:23.924383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.656 [2024-07-15 20:58:23.924399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.656 [2024-07-15 20:58:23.924455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.656 [2024-07-15 20:58:23.924486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.656 [2024-07-15 20:58:23.924546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.656 [2024-07-15 20:58:23.924562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.656 #33 NEW cov: 12149 ft: 14628 corp: 28/587b lim: 25 exec/s: 33 rss: 72Mb L: 21/25 MS: 1 CrossOver- 00:07:56.915 [2024-07-15 20:58:23.964409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.915 [2024-07-15 20:58:23.964436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:23.964510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.915 [2024-07-15 20:58:23.964536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:23.964590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.915 [2024-07-15 20:58:23.964606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:23.964658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.915 [2024-07-15 20:58:23.964675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.915 #34 NEW cov: 12149 ft: 14651 corp: 29/609b lim: 25 exec/s: 34 rss: 72Mb L: 22/25 MS: 1 ChangeASCIIInt- 00:07:56.915 [2024-07-15 20:58:24.014563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.915 [2024-07-15 20:58:24.014589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.014642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.915 [2024-07-15 20:58:24.014658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.014711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:2 nsid:0 00:07:56.915 [2024-07-15 20:58:24.014726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.014780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.915 [2024-07-15 20:58:24.014794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.915 #35 NEW cov: 12149 ft: 14695 corp: 30/631b lim: 25 exec/s: 35 rss: 72Mb L: 22/25 MS: 1 CopyPart- 00:07:56.915 [2024-07-15 20:58:24.054660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.915 [2024-07-15 20:58:24.054686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.054739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.915 [2024-07-15 20:58:24.054755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.054806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.915 [2024-07-15 20:58:24.054822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.054875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.915 [2024-07-15 20:58:24.054891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.915 #36 NEW cov: 12149 ft: 14697 corp: 31/655b lim: 25 exec/s: 36 rss: 72Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:07:56.915 [2024-07-15 20:58:24.104817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.915 [2024-07-15 20:58:24.104844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.104893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.915 [2024-07-15 20:58:24.104909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.104965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.915 [2024-07-15 20:58:24.104981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.105034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.915 [2024-07-15 20:58:24.105050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.915 #37 NEW cov: 12149 ft: 14703 corp: 32/677b lim: 25 exec/s: 37 rss: 72Mb L: 22/25 MS: 1 CopyPart- 00:07:56.915 [2024-07-15 20:58:24.155059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 
nsid:0 00:07:56.915 [2024-07-15 20:58:24.155086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.155145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.915 [2024-07-15 20:58:24.155159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.155213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.915 [2024-07-15 20:58:24.155229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.155282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.915 [2024-07-15 20:58:24.155297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.155354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:56.915 [2024-07-15 20:58:24.155369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:56.915 #38 NEW cov: 12149 ft: 14717 corp: 33/702b lim: 25 exec/s: 38 rss: 72Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:56.915 [2024-07-15 20:58:24.195191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:56.915 [2024-07-15 20:58:24.195217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.195272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:56.915 [2024-07-15 20:58:24.195288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.195343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:56.915 [2024-07-15 20:58:24.195360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.195414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:56.915 [2024-07-15 20:58:24.195429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.915 [2024-07-15 20:58:24.195490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:56.915 [2024-07-15 20:58:24.195506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:57.175 #39 NEW cov: 12149 ft: 14728 corp: 34/727b lim: 25 exec/s: 39 rss: 72Mb L: 25/25 MS: 1 ChangeBit- 00:07:57.175 [2024-07-15 20:58:24.235308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.175 [2024-07-15 20:58:24.235335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.175 [2024-07-15 20:58:24.235390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.175 [2024-07-15 20:58:24.235406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.175 [2024-07-15 20:58:24.235461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.175 [2024-07-15 20:58:24.235493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.175 [2024-07-15 20:58:24.235546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.175 [2024-07-15 20:58:24.235564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.175 [2024-07-15 20:58:24.235621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:57.175 [2024-07-15 20:58:24.235638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:57.175 #40 NEW cov: 12149 ft: 14756 corp: 35/752b lim: 25 exec/s: 40 rss: 72Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:57.175 [2024-07-15 20:58:24.285458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.175 [2024-07-15 20:58:24.285485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.175 [2024-07-15 20:58:24.285553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.175 [2024-07-15 20:58:24.285567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.175 [2024-07-15 20:58:24.285620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.176 [2024-07-15 20:58:24.285636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.285688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.176 [2024-07-15 20:58:24.285704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.285759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:57.176 [2024-07-15 20:58:24.285775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:57.176 #41 NEW cov: 12149 ft: 14764 corp: 36/777b lim: 25 exec/s: 41 rss: 72Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:57.176 [2024-07-15 20:58:24.335472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.176 [2024-07-15 20:58:24.335499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.335551] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.176 [2024-07-15 20:58:24.335566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.335624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.176 [2024-07-15 20:58:24.335640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.335692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.176 [2024-07-15 20:58:24.335708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.176 #42 NEW cov: 12149 ft: 14768 corp: 37/799b lim: 25 exec/s: 42 rss: 72Mb L: 22/25 MS: 1 ChangeBit- 00:07:57.176 [2024-07-15 20:58:24.385495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.176 [2024-07-15 20:58:24.385523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.385572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.176 [2024-07-15 20:58:24.385588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.385643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.176 [2024-07-15 20:58:24.385659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.176 #43 NEW cov: 12149 ft: 14776 corp: 38/816b lim: 25 exec/s: 43 rss: 72Mb L: 17/25 MS: 1 InsertRepeatedBytes- 00:07:57.176 [2024-07-15 20:58:24.435748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:57.176 [2024-07-15 20:58:24.435774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.435842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:57.176 [2024-07-15 20:58:24.435858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.435912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:57.176 [2024-07-15 20:58:24.435928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.176 [2024-07-15 20:58:24.435985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:57.176 [2024-07-15 20:58:24.435999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.435 #44 NEW cov: 12149 ft: 14787 corp: 39/838b lim: 25 exec/s: 22 rss: 72Mb L: 22/25 MS: 1 ChangeBit- 00:07:57.435 #44 DONE cov: 12149 ft: 14787 corp: 39/838b lim: 25 
exec/s: 22 rss: 72Mb 00:07:57.435 Done 44 runs in 2 second(s) 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:57.435 20:58:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:57.435 [2024-07-15 20:58:24.639055] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:57.435 [2024-07-15 20:58:24.639123] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791369 ] 00:07:57.435 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.694 [2024-07-15 20:58:24.818166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.694 [2024-07-15 20:58:24.884361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.694 [2024-07-15 20:58:24.943536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.694 [2024-07-15 20:58:24.959840] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:57.694 INFO: Running with entropic power schedule (0xFF, 100). 00:07:57.694 INFO: Seed: 3216828079 00:07:57.953 INFO: Loaded 1 modules (358191 inline 8-bit counters): 358191 [0x29b254c, 0x2a09c7b), 00:07:57.953 INFO: Loaded 1 PC tables (358191 PCs): 358191 [0x2a09c80,0x2f80f70), 00:07:57.953 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:57.953 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.953 #2 INITED exec/s: 0 rss: 64Mb 00:07:57.953 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:57.953 This may also happen if the target rejected all inputs we tried so far 00:07:57.953 [2024-07-15 20:58:25.035825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.953 [2024-07-15 20:58:25.035859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.953 [2024-07-15 20:58:25.035984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.953 [2024-07-15 20:58:25.036007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.212 NEW_FUNC[1/698]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:58.212 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:58.212 #14 NEW cov: 11977 ft: 11978 corp: 2/46b lim: 100 exec/s: 0 rss: 70Mb L: 45/45 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:58.212 [2024-07-15 20:58:25.376677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952494682487996 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.212 [2024-07-15 20:58:25.376731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.212 #16 NEW cov: 12107 ft: 13455 corp: 3/74b lim: 100 exec/s: 0 rss: 70Mb L: 28/45 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:58.212 [2024-07-15 20:58:25.437015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.212 [2024-07-15 20:58:25.437043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.212 [2024-07-15 20:58:25.437165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.212 [2024-07-15 20:58:25.437190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.212 #17 NEW cov: 12113 ft: 13641 corp: 4/119b lim: 100 exec/s: 0 rss: 70Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:58.212 [2024-07-15 20:58:25.497259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.212 [2024-07-15 20:58:25.497285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.212 [2024-07-15 20:58:25.497410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.212 [2024-07-15 20:58:25.497434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.471 #18 NEW cov: 12198 ft: 13919 corp: 5/163b lim: 100 exec/s: 0 rss: 70Mb L: 44/45 MS: 1 CrossOver- 00:07:58.471 [2024-07-15 20:58:25.547116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.471 [2024-07-15 20:58:25.547141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.471 #27 NEW cov: 12198 ft: 14051 corp: 6/199b lim: 100 exec/s: 0 rss: 70Mb L: 36/45 MS: 4 CrossOver-EraseBytes-CopyPart-InsertRepeatedBytes- 00:07:58.471 [2024-07-15 20:58:25.597775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.471 [2024-07-15 20:58:25.597807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.471 [2024-07-15 20:58:25.597899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.471 [2024-07-15 20:58:25.597926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.471 [2024-07-15 20:58:25.598047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.471 [2024-07-15 20:58:25.598071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.471 #28 NEW cov: 12198 ft: 14424 corp: 7/277b lim: 100 exec/s: 0 rss: 71Mb L: 78/78 MS: 1 CopyPart- 00:07:58.471 [2024-07-15 20:58:25.647918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1953184666997168923 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.471 [2024-07-15 20:58:25.647953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:58.471 [2024-07-15 20:58:25.648056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1953184666628070171 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.471 [2024-07-15 20:58:25.648081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.471 [2024-07-15 20:58:25.648209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1953184666628070171 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.471 [2024-07-15 20:58:25.648234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.471 #31 NEW cov: 12198 ft: 14463 corp: 8/347b lim: 100 exec/s: 0 rss: 71Mb L: 70/78 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:58.471 [2024-07-15 20:58:25.697554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.471 [2024-07-15 20:58:25.697591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.471 #37 NEW cov: 12198 ft: 14493 corp: 9/382b lim: 100 exec/s: 0 rss: 71Mb L: 35/78 MS: 1 EraseBytes- 00:07:58.472 [2024-07-15 20:58:25.747756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.472 [2024-07-15 20:58:25.747782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.731 #38 NEW cov: 12198 ft: 14523 corp: 10/417b lim: 100 exec/s: 0 rss: 71Mb L: 35/78 MS: 1 ChangeBinInt- 00:07:58.731 [2024-07-15 20:58:25.807965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.731 [2024-07-15 20:58:25.807991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.731 #39 NEW cov: 12198 ft: 14572 corp: 11/452b lim: 100 exec/s: 0 rss: 71Mb L: 35/78 MS: 1 ShuffleBytes- 00:07:58.731 [2024-07-15 20:58:25.847767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.731 [2024-07-15 20:58:25.847798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.731 #40 NEW cov: 12198 ft: 14668 corp: 12/488b lim: 100 exec/s: 0 rss: 71Mb L: 36/78 MS: 1 ChangeByte- 00:07:58.731 [2024-07-15 20:58:25.897804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.731 [2024-07-15 20:58:25.897835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.731 NEW_FUNC[1/1]: 0x1a7d240 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:58.731 #41 NEW cov: 12221 ft: 14799 corp: 13/524b lim: 100 exec/s: 0 rss: 71Mb L: 36/78 MS: 1 ChangeBit- 00:07:58.731 [2024-07-15 20:58:25.948917] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.731 [2024-07-15 20:58:25.948946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.731 [2024-07-15 20:58:25.949073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.731 [2024-07-15 20:58:25.949096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.731 [2024-07-15 20:58:25.949223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.731 [2024-07-15 20:58:25.949249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.731 #42 NEW cov: 12221 ft: 14893 corp: 14/603b lim: 100 exec/s: 0 rss: 71Mb L: 79/79 MS: 1 InsertByte- 00:07:58.731 [2024-07-15 20:58:26.018563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.731 [2024-07-15 20:58:26.018598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.991 #53 NEW cov: 12221 ft: 14905 corp: 15/640b lim: 100 exec/s: 53 rss: 71Mb L: 37/79 MS: 1 InsertByte- 00:07:58.991 [2024-07-15 20:58:26.079219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1953184666997168923 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.079252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.991 [2024-07-15 20:58:26.079382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1953184666628070171 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.079407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.991 [2024-07-15 20:58:26.079537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2006946387179805467 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.079562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.991 #59 NEW cov: 12221 ft: 14933 corp: 16/711b lim: 100 exec/s: 59 rss: 71Mb L: 71/79 MS: 1 InsertByte- 00:07:58.991 [2024-07-15 20:58:26.139694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.139728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.991 [2024-07-15 20:58:26.139823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.139848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.991 [2024-07-15 20:58:26.139973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.139995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.991 [2024-07-15 20:58:26.140123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.140151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.991 #60 NEW cov: 12221 ft: 15314 corp: 17/804b lim: 100 exec/s: 60 rss: 71Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:07:58.991 [2024-07-15 20:58:26.189571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.189606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.991 [2024-07-15 20:58:26.189723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.189748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.991 [2024-07-15 20:58:26.189867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.189897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.991 #61 NEW cov: 12221 ft: 15328 corp: 18/883b lim: 100 exec/s: 61 rss: 71Mb L: 79/93 MS: 1 InsertByte- 00:07:58.991 [2024-07-15 20:58:26.239271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.991 [2024-07-15 20:58:26.239298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.991 #62 NEW cov: 12221 ft: 15345 corp: 19/919b lim: 100 exec/s: 62 rss: 71Mb L: 36/93 MS: 1 CopyPart- 00:07:59.250 [2024-07-15 20:58:26.290214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.250 [2024-07-15 20:58:26.290247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.250 [2024-07-15 20:58:26.290322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.250 [2024-07-15 20:58:26.290345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.250 [2024-07-15 20:58:26.290467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 
nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.250 [2024-07-15 20:58:26.290491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.250 [2024-07-15 20:58:26.290619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551404 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.250 [2024-07-15 20:58:26.290642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.250 #63 NEW cov: 12221 ft: 15353 corp: 20/999b lim: 100 exec/s: 63 rss: 71Mb L: 80/93 MS: 1 InsertByte- 00:07:59.250 [2024-07-15 20:58:26.339965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.250 [2024-07-15 20:58:26.339998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.250 [2024-07-15 20:58:26.340123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.250 [2024-07-15 20:58:26.340154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.250 [2024-07-15 20:58:26.340274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743377924849663 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.250 [2024-07-15 20:58:26.340297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.250 #64 NEW cov: 12221 ft: 15408 corp: 21/1078b lim: 100 exec/s: 64 rss: 71Mb L: 79/93 MS: 1 ChangeByte- 00:07:59.250 [2024-07-15 20:58:26.399737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.250 [2024-07-15 20:58:26.399762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.251 #65 NEW cov: 12221 ft: 15567 corp: 22/1101b lim: 100 exec/s: 65 rss: 72Mb L: 23/93 MS: 1 EraseBytes- 00:07:59.251 [2024-07-15 20:58:26.460154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069600182271 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.251 [2024-07-15 20:58:26.460189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.251 [2024-07-15 20:58:26.460307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.251 [2024-07-15 20:58:26.460329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.251 #67 NEW cov: 12221 ft: 15570 corp: 23/1142b lim: 100 exec/s: 67 rss: 72Mb L: 41/93 MS: 2 ChangeBit-CrossOver- 00:07:59.251 [2024-07-15 20:58:26.510029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:59.251 [2024-07-15 20:58:26.510055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.251 #68 NEW cov: 12221 ft: 15635 corp: 24/1167b lim: 100 exec/s: 68 rss: 72Mb L: 25/93 MS: 1 EraseBytes- 00:07:59.510 [2024-07-15 20:58:26.560948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.560979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.561076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.561106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.561219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.561245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.561367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.561392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.510 #69 NEW cov: 12221 ft: 15649 corp: 25/1264b lim: 100 exec/s: 69 rss: 72Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:07:59.510 [2024-07-15 20:58:26.600823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.600856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.600973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.600996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.601125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.601150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.510 #70 NEW cov: 12221 ft: 15668 corp: 26/1343b lim: 100 exec/s: 70 rss: 72Mb L: 79/97 MS: 1 CrossOver- 00:07:59.510 [2024-07-15 20:58:26.650514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.650540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.510 #71 
NEW cov: 12221 ft: 15704 corp: 27/1380b lim: 100 exec/s: 71 rss: 72Mb L: 37/97 MS: 1 InsertByte- 00:07:59.510 [2024-07-15 20:58:26.690926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.690958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.691079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.691107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.510 #72 NEW cov: 12221 ft: 15768 corp: 28/1425b lim: 100 exec/s: 72 rss: 72Mb L: 45/97 MS: 1 ShuffleBytes- 00:07:59.510 [2024-07-15 20:58:26.741456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.741482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.741581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.741603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.741718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.741743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.741866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.741892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.510 #73 NEW cov: 12221 ft: 15802 corp: 29/1509b lim: 100 exec/s: 73 rss: 72Mb L: 84/97 MS: 1 CrossOver- 00:07:59.510 [2024-07-15 20:58:26.791729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.791762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.510 [2024-07-15 20:58:26.791853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.510 [2024-07-15 20:58:26.791876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.511 [2024-07-15 20:58:26.792000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.511 
[2024-07-15 20:58:26.792028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.511 [2024-07-15 20:58:26.792146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.511 [2024-07-15 20:58:26.792170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.770 #74 NEW cov: 12221 ft: 15811 corp: 30/1606b lim: 100 exec/s: 74 rss: 72Mb L: 97/97 MS: 1 ChangeBit- 00:07:59.770 [2024-07-15 20:58:26.851611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.851644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.770 [2024-07-15 20:58:26.851741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.851764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.770 [2024-07-15 20:58:26.851879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.851904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.770 #75 NEW cov: 12221 ft: 15828 corp: 31/1666b lim: 100 exec/s: 75 rss: 72Mb L: 60/97 MS: 1 CrossOver- 00:07:59.770 [2024-07-15 20:58:26.912100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.912131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.770 [2024-07-15 20:58:26.912247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.912270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.770 [2024-07-15 20:58:26.912386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.912410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.770 [2024-07-15 20:58:26.912531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2738188573441261567 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.912556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.770 #76 NEW cov: 12221 ft: 15870 corp: 32/1746b lim: 100 exec/s: 76 rss: 72Mb L: 80/97 MS: 1 InsertByte- 00:07:59.770 [2024-07-15 20:58:26.971748] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583405055 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.971778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.770 [2024-07-15 20:58:26.971884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:26.971909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.770 #77 NEW cov: 12221 ft: 15897 corp: 33/1790b lim: 100 exec/s: 77 rss: 72Mb L: 44/97 MS: 1 CrossOver- 00:07:59.770 [2024-07-15 20:58:27.011260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069683019775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.770 [2024-07-15 20:58:27.011292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.770 #78 NEW cov: 12221 ft: 15927 corp: 34/1813b lim: 100 exec/s: 39 rss: 72Mb L: 23/97 MS: 1 ChangeBit- 00:07:59.770 #78 DONE cov: 12221 ft: 15927 corp: 34/1813b lim: 100 exec/s: 39 rss: 72Mb 00:07:59.770 Done 78 runs in 2 second(s) 00:08:00.029 20:58:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:00.029 20:58:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:00.029 20:58:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:00.029 20:58:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:00.029 00:08:00.029 real 1m4.614s 00:08:00.029 user 1m40.826s 00:08:00.029 sys 0m7.303s 00:08:00.029 20:58:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.029 20:58:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:00.029 ************************************ 00:08:00.029 END TEST nvmf_llvm_fuzz 00:08:00.029 ************************************ 00:08:00.029 20:58:27 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:08:00.029 20:58:27 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:00.029 20:58:27 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:00.029 20:58:27 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:00.029 20:58:27 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.029 20:58:27 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.029 20:58:27 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:00.029 ************************************ 00:08:00.029 START TEST vfio_llvm_fuzz 00:08:00.029 ************************************ 00:08:00.029 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:00.291 * Looking for test storage... 
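At this point ../common.sh advances its fuzzer loop, the nvmf run finishes, and llvm.sh hands the next script to run_test, which brackets each child with START TEST / END TEST banners and a real/user/sys timing summary. A minimal sketch of what such a wrapper looks like, inferred from the banners and timing lines in the log rather than the actual autotest_common.sh implementation:

    # Hedged sketch of the run_test wrapper behaviour seen above (banner, timed
    # execution, closing banner); the real helper also manages xtrace and traps.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # e.g. the call visible in llvm.sh above:
    # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh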
00:08:00.291 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:00.291 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:00.292 #define SPDK_CONFIG_H 00:08:00.292 #define SPDK_CONFIG_APPS 1 00:08:00.292 #define SPDK_CONFIG_ARCH native 00:08:00.292 #undef SPDK_CONFIG_ASAN 00:08:00.292 #undef SPDK_CONFIG_AVAHI 00:08:00.292 #undef SPDK_CONFIG_CET 00:08:00.292 #define SPDK_CONFIG_COVERAGE 1 00:08:00.292 #define SPDK_CONFIG_CROSS_PREFIX 00:08:00.292 #undef SPDK_CONFIG_CRYPTO 00:08:00.292 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:00.292 #undef SPDK_CONFIG_CUSTOMOCF 00:08:00.292 #undef SPDK_CONFIG_DAOS 00:08:00.292 #define SPDK_CONFIG_DAOS_DIR 00:08:00.292 #define SPDK_CONFIG_DEBUG 1 00:08:00.292 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:00.292 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:00.292 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:00.292 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:00.292 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:00.292 #undef SPDK_CONFIG_DPDK_UADK 00:08:00.292 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:00.292 #define SPDK_CONFIG_EXAMPLES 1 00:08:00.292 #undef SPDK_CONFIG_FC 00:08:00.292 #define SPDK_CONFIG_FC_PATH 00:08:00.292 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:00.292 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:00.292 #undef SPDK_CONFIG_FUSE 00:08:00.292 #define SPDK_CONFIG_FUZZER 1 00:08:00.292 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:00.292 #undef SPDK_CONFIG_GOLANG 00:08:00.292 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:00.292 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:00.292 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:00.292 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:00.292 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:00.292 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:00.292 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:00.292 #define SPDK_CONFIG_IDXD 1 00:08:00.292 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:00.292 #undef SPDK_CONFIG_IPSEC_MB 00:08:00.292 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:00.292 #define SPDK_CONFIG_ISAL 1 00:08:00.292 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:00.292 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:00.292 #define SPDK_CONFIG_LIBDIR 00:08:00.292 #undef SPDK_CONFIG_LTO 00:08:00.292 #define SPDK_CONFIG_MAX_LCORES 128 00:08:00.292 #define SPDK_CONFIG_NVME_CUSE 1 00:08:00.292 #undef SPDK_CONFIG_OCF 00:08:00.292 #define SPDK_CONFIG_OCF_PATH 00:08:00.292 #define SPDK_CONFIG_OPENSSL_PATH 00:08:00.292 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:00.292 #define SPDK_CONFIG_PGO_DIR 00:08:00.292 #undef SPDK_CONFIG_PGO_USE 00:08:00.292 #define SPDK_CONFIG_PREFIX /usr/local 00:08:00.292 #undef SPDK_CONFIG_RAID5F 00:08:00.292 #undef SPDK_CONFIG_RBD 00:08:00.292 #define SPDK_CONFIG_RDMA 1 00:08:00.292 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:00.292 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:00.292 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:00.292 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:00.292 #undef SPDK_CONFIG_SHARED 00:08:00.292 #undef SPDK_CONFIG_SMA 00:08:00.292 #define SPDK_CONFIG_TESTS 1 00:08:00.292 #undef SPDK_CONFIG_TSAN 00:08:00.292 #define SPDK_CONFIG_UBLK 1 00:08:00.292 #define SPDK_CONFIG_UBSAN 1 00:08:00.292 #undef SPDK_CONFIG_UNIT_TESTS 00:08:00.292 #undef SPDK_CONFIG_URING 00:08:00.292 #define SPDK_CONFIG_URING_PATH 00:08:00.292 #undef SPDK_CONFIG_URING_ZNS 00:08:00.292 #undef SPDK_CONFIG_USDT 00:08:00.292 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:00.292 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:00.292 #define SPDK_CONFIG_VFIO_USER 1 00:08:00.292 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:00.292 #define SPDK_CONFIG_VHOST 1 00:08:00.292 #define SPDK_CONFIG_VIRTIO 1 00:08:00.292 #undef SPDK_CONFIG_VTUNE 00:08:00.292 #define SPDK_CONFIG_VTUNE_DIR 00:08:00.292 #define SPDK_CONFIG_WERROR 1 00:08:00.292 #define SPDK_CONFIG_WPDK_DIR 00:08:00.292 #undef SPDK_CONFIG_XNVME 00:08:00.292 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.292 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
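The vfio run.sh trace above sources autotest_common.sh, which pulls in build_config.sh and dumps the generated SPDK_CONFIG_H contents, and applications.sh then checks that include/spdk/config.h defines SPDK_CONFIG_DEBUG before honoring SPDK_AUTOTEST_DEBUG_APPS. A hedged sketch of that check, with illustrative variable names rather than the exact applications.sh code:

    # Sketch of the debug-build check suggested by the applications.sh trace:
    # read the generated config header and pattern-match for the DEBUG define.
    _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    config_h=$_root/include/spdk/config.h

    if [[ -e $config_h ]] && [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        # Debug build detected; only then do the extra debug-app options apply.
        if (( ${SPDK_AUTOTEST_DEBUG_APPS:-0} )); then
            echo "debug application options enabled"
        fi
    fi

paths/export.sh then prepends the Go, protoc, and golangci toolchain directories to PATH, which is why the exported PATH in the trace repeats those entries several times.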
00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:00.293 
20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:00.293 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:00.294 20:58:27 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:08:00.294 20:58:27 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:08:00.294 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:00.295 20:58:27 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 791937 ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 791937 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.DM0ydu 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.DM0ydu/tests/vfio /tmp/spdk.DM0ydu 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=954408960 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4330020864 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=53934628864 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742317568 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7807688704 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866448384 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342484992 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348465152 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5980160 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870204416 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=954368 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:00.295 20:58:27 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:00.295 * Looking for test storage... 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=53934628864 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10022281216 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:00.295 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:00.295 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:00.296 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:00.297 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:00.297 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:00.297 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:00.297 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:00.297 20:58:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:00.556 [2024-07-15 20:58:27.582586] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:00.556 [2024-07-15 20:58:27.582675] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791978 ] 00:08:00.556 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.556 [2024-07-15 20:58:27.656623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.556 [2024-07-15 20:58:27.732918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.815 INFO: Running with entropic power schedule (0xFF, 100). 00:08:00.815 INFO: Seed: 1856853844 00:08:00.815 INFO: Loaded 1 modules (355427 inline 8-bit counters): 355427 [0x2972d4c, 0x29c99af), 00:08:00.815 INFO: Loaded 1 PC tables (355427 PCs): 355427 [0x29c99b0,0x2f35fe0), 00:08:00.815 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:00.815 INFO: A corpus is not provided, starting from an empty corpus 00:08:00.815 #2 INITED exec/s: 0 rss: 65Mb 00:08:00.815 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:00.815 This may also happen if the target rejected all inputs we tried so far 00:08:00.815 [2024-07-15 20:58:27.965736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:01.333 NEW_FUNC[1/657]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:01.333 NEW_FUNC[2/657]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:01.333 #17 NEW cov: 10962 ft: 10891 corp: 2/7b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 5 ShuffleBytes-ChangeBit-ChangeByte-InsertRepeatedBytes-InsertByte- 00:08:01.333 NEW_FUNC[1/1]: 0x143afb0 in sq_dbl_tailp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:572 00:08:01.333 #22 NEW cov: 10982 ft: 13590 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 5 ChangeBit-InsertRepeatedBytes-CopyPart-CopyPart-CrossOver- 00:08:01.593 NEW_FUNC[1/1]: 0x1a49770 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:01.593 #25 NEW cov: 11002 ft: 14385 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:01.852 #36 NEW cov: 11002 ft: 15430 corp: 5/25b lim: 6 exec/s: 36 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:08:02.111 #47 NEW cov: 11002 ft: 16336 corp: 6/31b lim: 6 exec/s: 47 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:08:02.111 #48 NEW cov: 11002 ft: 16809 corp: 7/37b lim: 6 exec/s: 48 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:08:02.370 #57 NEW cov: 11002 ft: 17048 corp: 8/43b lim: 6 exec/s: 57 rss: 73Mb L: 6/6 MS: 4 InsertByte-CrossOver-InsertByte-CrossOver- 00:08:02.629 #58 NEW cov: 11002 ft: 17490 corp: 9/49b lim: 6 exec/s: 58 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:02.629 #59 NEW cov: 11009 ft: 17565 corp: 10/55b lim: 6 exec/s: 59 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:08:02.888 #61 NEW cov: 11009 ft: 17846 corp: 11/61b lim: 6 exec/s: 30 rss: 74Mb L: 6/6 MS: 2 EraseBytes-CopyPart- 00:08:02.888 #61 DONE cov: 11009 ft: 17846 corp: 11/61b lim: 6 exec/s: 30 rss: 74Mb 00:08:02.888 Done 61 runs in 2 second(s) 00:08:02.888 [2024-07-15 20:58:30.092639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local 
vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:03.149 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:03.149 20:58:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:03.149 [2024-07-15 20:58:30.387790] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:03.149 [2024-07-15 20:58:30.387860] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792512 ] 00:08:03.149 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.408 [2024-07-15 20:58:30.461199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.408 [2024-07-15 20:58:30.534273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.668 INFO: Running with entropic power schedule (0xFF, 100). 00:08:03.668 INFO: Seed: 375898561 00:08:03.668 INFO: Loaded 1 modules (355427 inline 8-bit counters): 355427 [0x2972d4c, 0x29c99af), 00:08:03.668 INFO: Loaded 1 PC tables (355427 PCs): 355427 [0x29c99b0,0x2f35fe0), 00:08:03.668 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:03.668 INFO: A corpus is not provided, starting from an empty corpus 00:08:03.668 #2 INITED exec/s: 0 rss: 65Mb 00:08:03.668 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:03.668 This may also happen if the target rejected all inputs we tried so far 00:08:03.668 [2024-07-15 20:58:30.790922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:03.668 [2024-07-15 20:58:30.811528] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:03.668 [2024-07-15 20:58:30.811555] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:03.668 [2024-07-15 20:58:30.811573] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:03.927 NEW_FUNC[1/658]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:03.927 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:03.927 #46 NEW cov: 10958 ft: 10468 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 4 InsertByte-CopyPart-ChangeByte-CopyPart- 00:08:04.185 [2024-07-15 20:58:31.238060] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.185 [2024-07-15 20:58:31.238093] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.185 [2024-07-15 20:58:31.238112] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.185 NEW_FUNC[1/2]: 0x16cd3d0 in _is_io_flags_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:141 00:08:04.185 NEW_FUNC[2/2]: 0x16e9e00 in _nvme_md_excluded_from_xfer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:54 00:08:04.185 #47 NEW cov: 10981 ft: 13313 corp: 3/9b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:08:04.185 [2024-07-15 20:58:31.362056] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.185 [2024-07-15 20:58:31.362083] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.185 [2024-07-15 20:58:31.362102] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.185 #51 NEW cov: 10981 ft: 14399 corp: 4/13b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 4 ShuffleBytes-InsertByte-InsertByte-InsertByte- 00:08:04.445 [2024-07-15 20:58:31.486959] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.445 [2024-07-15 20:58:31.486986] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.445 [2024-07-15 20:58:31.487004] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.445 NEW_FUNC[1/1]: 0x1a49770 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:04.445 #69 NEW cov: 10998 ft: 15018 corp: 5/17b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 3 EraseBytes-ChangeByte-CopyPart- 00:08:04.445 [2024-07-15 20:58:31.599938] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.445 [2024-07-15 20:58:31.599963] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.445 [2024-07-15 20:58:31.599982] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.445 #70 NEW cov: 10998 ft: 16217 corp: 6/21b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:08:04.445 [2024-07-15 20:58:31.725073] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: bad command 1 00:08:04.445 [2024-07-15 20:58:31.725098] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.445 [2024-07-15 20:58:31.725116] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.704 #76 NEW cov: 10998 ft: 16245 corp: 7/25b lim: 4 exec/s: 76 rss: 73Mb L: 4/4 MS: 1 ChangeBit- 00:08:04.704 [2024-07-15 20:58:31.840032] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.704 [2024-07-15 20:58:31.840057] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.704 [2024-07-15 20:58:31.840076] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.704 #77 NEW cov: 10998 ft: 16466 corp: 8/29b lim: 4 exec/s: 77 rss: 73Mb L: 4/4 MS: 1 ChangeBit- 00:08:04.704 [2024-07-15 20:58:31.954991] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.704 [2024-07-15 20:58:31.955015] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.704 [2024-07-15 20:58:31.955033] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.963 #78 NEW cov: 10998 ft: 16674 corp: 9/33b lim: 4 exec/s: 78 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:08:04.963 [2024-07-15 20:58:32.068929] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.963 [2024-07-15 20:58:32.068954] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.963 [2024-07-15 20:58:32.068973] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.963 #79 NEW cov: 10998 ft: 16768 corp: 10/37b lim: 4 exec/s: 79 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:04.963 [2024-07-15 20:58:32.183796] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:04.963 [2024-07-15 20:58:32.183824] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:04.963 [2024-07-15 20:58:32.183842] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:04.963 #80 NEW cov: 10998 ft: 16805 corp: 11/41b lim: 4 exec/s: 80 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:08:05.221 [2024-07-15 20:58:32.297781] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.221 [2024-07-15 20:58:32.297808] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.221 [2024-07-15 20:58:32.297827] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.221 #81 NEW cov: 10998 ft: 16938 corp: 12/45b lim: 4 exec/s: 81 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:08:05.221 [2024-07-15 20:58:32.411787] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.221 [2024-07-15 20:58:32.411812] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.221 [2024-07-15 20:58:32.411830] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.221 #82 NEW cov: 10998 ft: 17045 corp: 13/49b lim: 4 exec/s: 82 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:05.480 [2024-07-15 20:58:32.525679] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.480 [2024-07-15 20:58:32.525711] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.480 [2024-07-15 20:58:32.525731] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.480 #83 NEW cov: 11005 ft: 17092 corp: 14/53b lim: 4 exec/s: 83 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:05.480 [2024-07-15 20:58:32.639755] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.480 [2024-07-15 20:58:32.639780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.480 [2024-07-15 20:58:32.639798] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.480 #84 NEW cov: 11005 ft: 17329 corp: 15/57b lim: 4 exec/s: 84 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:05.480 [2024-07-15 20:58:32.754896] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:05.480 [2024-07-15 20:58:32.754921] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:05.480 [2024-07-15 20:58:32.754939] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:05.738 #85 NEW cov: 11005 ft: 17387 corp: 16/61b lim: 4 exec/s: 42 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:05.738 #85 DONE cov: 11005 ft: 17387 corp: 16/61b lim: 4 exec/s: 42 rss: 74Mb 00:08:05.738 Done 85 runs in 2 second(s) 00:08:05.738 [2024-07-15 20:58:32.847637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:05.998 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:05.998 20:58:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:05.998 [2024-07-15 20:58:33.138128] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:05.998 [2024-07-15 20:58:33.138212] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793014 ] 00:08:05.998 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.998 [2024-07-15 20:58:33.210911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.998 [2024-07-15 20:58:33.281588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.257 INFO: Running with entropic power schedule (0xFF, 100). 00:08:06.257 INFO: Seed: 3111895214 00:08:06.257 INFO: Loaded 1 modules (355427 inline 8-bit counters): 355427 [0x2972d4c, 0x29c99af), 00:08:06.257 INFO: Loaded 1 PC tables (355427 PCs): 355427 [0x29c99b0,0x2f35fe0), 00:08:06.257 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:06.257 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.257 #2 INITED exec/s: 0 rss: 64Mb 00:08:06.257 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:06.257 This may also happen if the target rejected all inputs we tried so far 00:08:06.257 [2024-07-15 20:58:33.515338] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:06.516 [2024-07-15 20:58:33.563275] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:06.775 NEW_FUNC[1/659]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:06.775 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:06.775 #13 NEW cov: 10950 ft: 10922 corp: 2/9b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:08:06.775 [2024-07-15 20:58:34.045890] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:07.034 #19 NEW cov: 10965 ft: 13756 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 ChangeBit- 00:08:07.034 [2024-07-15 20:58:34.230376] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:07.293 NEW_FUNC[1/1]: 0x1a49770 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:07.293 #25 NEW cov: 10982 ft: 14748 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:08:07.293 [2024-07-15 20:58:34.424269] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:07.293 #26 NEW cov: 10982 ft: 15120 corp: 5/33b lim: 8 exec/s: 26 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:07.553 [2024-07-15 20:58:34.617399] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:07.553 #31 NEW cov: 10982 ft: 16236 corp: 6/41b lim: 8 exec/s: 31 rss: 73Mb L: 8/8 MS: 5 EraseBytes-ChangeBinInt-ShuffleBytes-ChangeBit-CrossOver- 00:08:07.553 [2024-07-15 20:58:34.809737] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:07.812 #32 NEW cov: 10982 ft: 16530 corp: 7/49b lim: 8 exec/s: 32 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:08:07.812 [2024-07-15 20:58:34.996767] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.071 #33 NEW cov: 10982 ft: 17139 corp: 8/57b lim: 8 exec/s: 33 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:08:08.071 [2024-07-15 20:58:35.180467] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.071 #34 NEW cov: 10992 ft: 17228 corp: 9/65b lim: 8 exec/s: 34 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:08:08.330 [2024-07-15 20:58:35.373702] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:08.330 #50 NEW cov: 10992 ft: 17497 corp: 10/73b lim: 8 exec/s: 25 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:08.330 #50 DONE cov: 10992 ft: 17497 corp: 10/73b lim: 8 exec/s: 25 rss: 73Mb 00:08:08.330 Done 50 runs in 2 second(s) 00:08:08.330 [2024-07-15 20:58:35.513637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:08.590 20:58:35 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:08.590 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:08.590 20:58:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:08.590 [2024-07-15 20:58:35.806072] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:08.590 [2024-07-15 20:58:35.806165] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793349 ] 00:08:08.590 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.590 [2024-07-15 20:58:35.880327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.851 [2024-07-15 20:58:35.950009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.851 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:08.851 INFO: Seed: 1496935605 00:08:09.111 INFO: Loaded 1 modules (355427 inline 8-bit counters): 355427 [0x2972d4c, 0x29c99af), 00:08:09.111 INFO: Loaded 1 PC tables (355427 PCs): 355427 [0x29c99b0,0x2f35fe0), 00:08:09.111 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:09.111 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.111 #2 INITED exec/s: 0 rss: 65Mb 00:08:09.111 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:09.111 This may also happen if the target rejected all inputs we tried so far 00:08:09.111 [2024-07-15 20:58:36.192135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:09.111 [2024-07-15 20:58:36.264474] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=323 offset=0 prot=0x3: Invalid argument 00:08:09.111 [2024-07-15 20:58:36.264501] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:09.111 [2024-07-15 20:58:36.264511] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:09.111 [2024-07-15 20:58:36.264529] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:09.679 NEW_FUNC[1/659]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:09.679 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:09.679 #89 NEW cov: 10964 ft: 10513 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 2 InsertRepeatedBytes-InsertByte- 00:08:09.679 [2024-07-15 20:58:36.760203] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), 0x4000000000) fd=325 offset=0 prot=0x3: Cannot allocate memory 00:08:09.679 [2024-07-15 20:58:36.760239] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0x4000000000) offset=0 flags=0x3: Cannot allocate memory 00:08:09.679 [2024-07-15 20:58:36.760250] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Cannot allocate memory 00:08:09.679 [2024-07-15 20:58:36.760267] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:09.679 #100 NEW cov: 10978 ft: 13581 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeBit- 00:08:09.938 NEW_FUNC[1/1]: 0x1a49770 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:09.938 #106 NEW cov: 10999 ft: 15408 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CMP- DE: "\001\000\000\000\005\3429\320"- 00:08:09.938 [2024-07-15 20:58:37.149472] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xf5000000000000, 0xf5000000000000) fd=325 offset=0 prot=0x3: Invalid argument 00:08:09.938 [2024-07-15 20:58:37.149495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xf5000000000000, 0xf5000000000000) offset=0 flags=0x3: Invalid argument 00:08:09.938 [2024-07-15 20:58:37.149505] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:09.938 [2024-07-15 20:58:37.149538] vfio_user.c: 144:vfio_user_read: 
*ERROR*: Command 2 return failure 00:08:10.197 #112 NEW cov: 10999 ft: 15826 corp: 5/129b lim: 32 exec/s: 112 rss: 73Mb L: 32/32 MS: 1 CMP- DE: "\000\000\000\365"- 00:08:10.197 [2024-07-15 20:58:37.342067] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), 0x1000000000) fd=325 offset=0 prot=0x3: Cannot allocate memory 00:08:10.197 [2024-07-15 20:58:37.342090] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0x1000000000) offset=0 flags=0x3: Cannot allocate memory 00:08:10.197 [2024-07-15 20:58:37.342101] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Cannot allocate memory 00:08:10.197 [2024-07-15 20:58:37.342120] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:10.197 #118 NEW cov: 10999 ft: 16514 corp: 6/161b lim: 32 exec/s: 118 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:08:10.456 #119 NEW cov: 10999 ft: 16557 corp: 7/193b lim: 32 exec/s: 119 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:10.715 #120 NEW cov: 10999 ft: 17450 corp: 8/225b lim: 32 exec/s: 120 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:10.715 #121 NEW cov: 11006 ft: 17759 corp: 9/257b lim: 32 exec/s: 121 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:08:10.974 [2024-07-15 20:58:38.058261] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 17149707655904755712 > max 8796093022208 00:08:10.974 [2024-07-15 20:58:38.058291] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0xee00004000000000) offset=0 flags=0x3: No space left on device 00:08:10.974 [2024-07-15 20:58:38.058303] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:08:10.974 [2024-07-15 20:58:38.058319] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:10.974 #122 NEW cov: 11006 ft: 17845 corp: 10/289b lim: 32 exec/s: 122 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:08:11.233 #123 NEW cov: 11006 ft: 18186 corp: 11/321b lim: 32 exec/s: 61 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:08:11.233 #123 DONE cov: 11006 ft: 18186 corp: 11/321b lim: 32 exec/s: 61 rss: 73Mb 00:08:11.233 ###### Recommended dictionary. ###### 00:08:11.233 "\001\000\000\000\005\3429\320" # Uses: 0 00:08:11.233 "\000\000\000\365" # Uses: 0 00:08:11.233 ###### End of recommended dictionary. 
###### 00:08:11.233 Done 123 runs in 2 second(s) 00:08:11.233 [2024-07-15 20:58:38.362633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:11.492 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:11.493 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:11.493 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:11.493 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:11.493 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:11.493 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:11.493 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:11.493 20:58:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:11.493 [2024-07-15 20:58:38.660207] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:08:11.493 [2024-07-15 20:58:38.660277] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793881 ] 00:08:11.493 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.493 [2024-07-15 20:58:38.733753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.751 [2024-07-15 20:58:38.805019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.751 INFO: Running with entropic power schedule (0xFF, 100). 00:08:11.751 INFO: Seed: 44956289 00:08:11.751 INFO: Loaded 1 modules (355427 inline 8-bit counters): 355427 [0x2972d4c, 0x29c99af), 00:08:11.751 INFO: Loaded 1 PC tables (355427 PCs): 355427 [0x29c99b0,0x2f35fe0), 00:08:11.751 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:11.751 INFO: A corpus is not provided, starting from an empty corpus 00:08:11.751 #2 INITED exec/s: 0 rss: 66Mb 00:08:11.751 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:11.751 This may also happen if the target rejected all inputs we tried so far 00:08:12.009 [2024-07-15 20:58:39.049709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:12.267 NEW_FUNC[1/659]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:12.267 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:12.267 #15 NEW cov: 10959 ft: 10876 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 3 InsertRepeatedBytes-ChangeByte-CopyPart- 00:08:12.530 #16 NEW cov: 10974 ft: 13695 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:08:12.789 NEW_FUNC[1/1]: 0x1a49770 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:12.789 #20 NEW cov: 10991 ft: 15158 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 EraseBytes-InsertByte-ChangeBinInt-CopyPart- 00:08:12.789 #21 NEW cov: 10991 ft: 16836 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:13.047 #22 NEW cov: 10991 ft: 17198 corp: 6/161b lim: 32 exec/s: 22 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:13.047 #23 NEW cov: 10991 ft: 17493 corp: 7/193b lim: 32 exec/s: 23 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:13.305 #24 NEW cov: 10991 ft: 17905 corp: 8/225b lim: 32 exec/s: 24 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:13.563 #25 NEW cov: 10991 ft: 18186 corp: 9/257b lim: 32 exec/s: 25 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:13.563 #31 NEW cov: 10998 ft: 18341 corp: 10/289b lim: 32 exec/s: 31 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:08:13.821 #32 NEW cov: 10998 ft: 18441 corp: 11/321b lim: 32 exec/s: 32 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:14.080 #35 NEW cov: 10998 ft: 18571 corp: 12/353b lim: 32 exec/s: 17 rss: 75Mb L: 32/32 MS: 3 EraseBytes-ChangeByte-CrossOver- 00:08:14.080 #35 DONE cov: 10998 ft: 18571 corp: 12/353b lim: 32 exec/s: 17 rss: 75Mb 00:08:14.080 Done 35 runs in 2 second(s) 00:08:14.080 [2024-07-15 20:58:41.182641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:14.339 
20:58:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:14.339 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:14.339 20:58:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:14.339 [2024-07-15 20:58:41.475343] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:14.339 [2024-07-15 20:58:41.475414] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794425 ] 00:08:14.339 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.339 [2024-07-15 20:58:41.547024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.339 [2024-07-15 20:58:41.617977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.651 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:14.651 INFO: Seed: 2859954331 00:08:14.651 INFO: Loaded 1 modules (355427 inline 8-bit counters): 355427 [0x2972d4c, 0x29c99af), 00:08:14.651 INFO: Loaded 1 PC tables (355427 PCs): 355427 [0x29c99b0,0x2f35fe0), 00:08:14.651 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:14.651 INFO: A corpus is not provided, starting from an empty corpus 00:08:14.651 #2 INITED exec/s: 0 rss: 65Mb 00:08:14.651 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:14.651 This may also happen if the target rejected all inputs we tried so far 00:08:14.651 [2024-07-15 20:58:41.856710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:14.651 [2024-07-15 20:58:41.909467] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:14.651 [2024-07-15 20:58:41.909513] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:15.168 NEW_FUNC[1/660]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:15.168 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:15.168 #109 NEW cov: 10969 ft: 10939 corp: 2/14b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:15.168 [2024-07-15 20:58:42.382404] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:15.168 [2024-07-15 20:58:42.382450] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:15.427 #110 NEW cov: 10983 ft: 14543 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CrossOver- 00:08:15.427 [2024-07-15 20:58:42.566167] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:15.427 [2024-07-15 20:58:42.566200] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:15.427 NEW_FUNC[1/1]: 0x1a49770 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:15.427 #111 NEW cov: 11000 ft: 15651 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:08:15.685 [2024-07-15 20:58:42.750603] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:15.685 [2024-07-15 20:58:42.750633] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:15.685 #112 NEW cov: 11000 ft: 15953 corp: 5/53b lim: 13 exec/s: 112 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:15.685 [2024-07-15 20:58:42.937114] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:15.685 [2024-07-15 20:58:42.937144] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:15.999 #113 NEW cov: 11000 ft: 16062 corp: 6/66b lim: 13 exec/s: 113 rss: 73Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:15.999 [2024-07-15 20:58:43.120948] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:15.999 [2024-07-15 20:58:43.120977] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:15.999 #119 NEW cov: 11000 ft: 16477 corp: 7/79b lim: 13 exec/s: 119 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:08:16.286 [2024-07-15 20:58:43.298507] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 
8 failed: Invalid argument 00:08:16.286 [2024-07-15 20:58:43.298536] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:16.286 #122 NEW cov: 11000 ft: 17060 corp: 8/92b lim: 13 exec/s: 122 rss: 74Mb L: 13/13 MS: 3 EraseBytes-ShuffleBytes-CopyPart- 00:08:16.286 [2024-07-15 20:58:43.484936] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:16.286 [2024-07-15 20:58:43.484965] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:16.543 #123 NEW cov: 11000 ft: 17289 corp: 9/105b lim: 13 exec/s: 123 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:16.543 [2024-07-15 20:58:43.669871] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:16.543 [2024-07-15 20:58:43.669901] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:16.543 #129 NEW cov: 11007 ft: 17368 corp: 10/118b lim: 13 exec/s: 129 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:08:16.819 [2024-07-15 20:58:43.855851] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:16.819 [2024-07-15 20:58:43.855881] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:16.819 #130 NEW cov: 11007 ft: 17538 corp: 11/131b lim: 13 exec/s: 65 rss: 74Mb L: 13/13 MS: 1 CMP- DE: "\000\000\000\000\000\000\000N"- 00:08:16.819 #130 DONE cov: 11007 ft: 17538 corp: 11/131b lim: 13 exec/s: 65 rss: 74Mb 00:08:16.819 ###### Recommended dictionary. ###### 00:08:16.819 "\000\000\000\000\000\000\000N" # Uses: 0 00:08:16.819 ###### End of recommended dictionary. ###### 00:08:16.819 Done 130 runs in 2 second(s) 00:08:16.819 [2024-07-15 20:58:43.989654] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:17.078 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:17.078 20:58:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:17.078 [2024-07-15 20:58:44.277235] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:17.078 [2024-07-15 20:58:44.277307] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794969 ] 00:08:17.078 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.078 [2024-07-15 20:58:44.350036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.336 [2024-07-15 20:58:44.426131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.336 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.336 INFO: Seed: 1372987179 00:08:17.336 INFO: Loaded 1 modules (355427 inline 8-bit counters): 355427 [0x2972d4c, 0x29c99af), 00:08:17.336 INFO: Loaded 1 PC tables (355427 PCs): 355427 [0x29c99b0,0x2f35fe0), 00:08:17.336 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:17.336 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.336 #2 INITED exec/s: 0 rss: 65Mb 00:08:17.336 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:17.336 This may also happen if the target rejected all inputs we tried so far 00:08:17.595 [2024-07-15 20:58:44.660887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:17.595 [2024-07-15 20:58:44.714520] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:17.595 [2024-07-15 20:58:44.714552] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:17.853 NEW_FUNC[1/657]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:17.853 NEW_FUNC[2/657]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:17.853 #21 NEW cov: 10898 ft: 10918 corp: 2/10b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 4 CopyPart-CopyPart-InsertRepeatedBytes-CMP- DE: "\001\000"- 00:08:18.112 [2024-07-15 20:58:45.182322] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:18.112 [2024-07-15 20:58:45.182364] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:18.112 NEW_FUNC[1/3]: 0x1419c50 in get_nvmf_vfio_user_req /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:5401 00:08:18.112 NEW_FUNC[2/3]: 0x1781200 in nvme_qpair_submit_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:1089 00:08:18.112 #32 NEW cov: 10969 ft: 14860 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:18.112 [2024-07-15 20:58:45.376104] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:18.112 [2024-07-15 20:58:45.376136] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:18.372 NEW_FUNC[1/1]: 0x1a49770 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:18.372 #40 NEW cov: 10989 ft: 15730 corp: 4/28b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 3 EraseBytes-CrossOver-CrossOver- 00:08:18.372 [2024-07-15 20:58:45.557012] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:18.372 [2024-07-15 20:58:45.557044] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:18.631 #46 NEW cov: 10989 ft: 16563 corp: 5/37b lim: 9 exec/s: 46 rss: 73Mb L: 9/9 MS: 1 PersAutoDict- DE: "\001\000"- 00:08:18.631 [2024-07-15 20:58:45.733325] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:18.631 [2024-07-15 20:58:45.733355] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:18.631 #47 NEW cov: 10989 ft: 16746 corp: 6/46b lim: 9 exec/s: 47 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:08:18.631 [2024-07-15 20:58:45.915759] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:18.631 [2024-07-15 20:58:45.915788] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:18.890 #48 NEW cov: 10989 ft: 17216 corp: 7/55b lim: 9 exec/s: 48 rss: 74Mb L: 9/9 MS: 1 PersAutoDict- DE: "\001\000"- 00:08:18.890 [2024-07-15 20:58:46.099489] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:18.890 [2024-07-15 20:58:46.099517] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:19.149 #49 NEW cov: 10989 ft: 17314 corp: 8/64b lim: 9 
exec/s: 49 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:19.149 [2024-07-15 20:58:46.276700] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:19.149 [2024-07-15 20:58:46.276729] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:19.149 #50 NEW cov: 10989 ft: 17368 corp: 9/73b lim: 9 exec/s: 50 rss: 74Mb L: 9/9 MS: 1 CopyPart- 00:08:19.408 [2024-07-15 20:58:46.460649] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:19.408 [2024-07-15 20:58:46.460679] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:19.408 #51 NEW cov: 10996 ft: 17605 corp: 10/82b lim: 9 exec/s: 51 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:19.408 [2024-07-15 20:58:46.643630] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:19.408 [2024-07-15 20:58:46.643660] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:19.667 #57 NEW cov: 10996 ft: 17627 corp: 11/91b lim: 9 exec/s: 28 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:19.667 #57 DONE cov: 10996 ft: 17627 corp: 11/91b lim: 9 exec/s: 28 rss: 74Mb 00:08:19.667 ###### Recommended dictionary. ###### 00:08:19.667 "\001\000" # Uses: 4 00:08:19.667 ###### End of recommended dictionary. ###### 00:08:19.667 Done 57 runs in 2 second(s) 00:08:19.667 [2024-07-15 20:58:46.774649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:19.925 20:58:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:19.925 20:58:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:19.925 20:58:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.925 20:58:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:19.925 00:08:19.925 real 0m19.771s 00:08:19.925 user 0m27.691s 00:08:19.925 sys 0m1.780s 00:08:19.925 20:58:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.925 20:58:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:19.925 ************************************ 00:08:19.925 END TEST vfio_llvm_fuzz 00:08:19.925 ************************************ 00:08:19.925 20:58:47 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:08:19.925 20:58:47 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:08:19.925 00:08:19.925 real 1m24.672s 00:08:19.925 user 2m8.611s 00:08:19.925 sys 0m9.300s 00:08:19.925 20:58:47 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.925 20:58:47 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:19.925 ************************************ 00:08:19.925 END TEST llvm_fuzz 00:08:19.925 ************************************ 00:08:19.925 20:58:47 -- common/autotest_common.sh@1142 -- # return 0 00:08:19.925 20:58:47 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:08:19.925 20:58:47 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:08:19.925 20:58:47 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:08:19.925 20:58:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.925 20:58:47 -- common/autotest_common.sh@10 -- # set +x 00:08:19.925 20:58:47 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:08:19.925 20:58:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:08:19.925 20:58:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 
00:08:19.925 20:58:47 -- common/autotest_common.sh@10 -- # set +x 00:08:26.496 INFO: APP EXITING 00:08:26.496 INFO: killing all VMs 00:08:26.496 INFO: killing vhost app 00:08:26.496 INFO: EXIT DONE 00:08:29.031 Waiting for block devices as requested 00:08:29.031 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:29.031 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:29.031 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:29.031 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:29.290 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:29.291 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:29.291 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:29.291 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:29.550 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:29.550 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:29.550 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:29.809 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:29.809 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:29.809 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:30.069 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:30.069 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:30.069 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:34.259 Cleaning 00:08:34.259 Removing: /dev/shm/spdk_tgt_trace.pid759251 00:08:34.259 Removing: /var/run/dpdk/spdk_pid756794 00:08:34.259 Removing: /var/run/dpdk/spdk_pid758043 00:08:34.259 Removing: /var/run/dpdk/spdk_pid759251 00:08:34.259 Removing: /var/run/dpdk/spdk_pid759949 00:08:34.259 Removing: /var/run/dpdk/spdk_pid760789 00:08:34.259 Removing: /var/run/dpdk/spdk_pid761067 00:08:34.259 Removing: /var/run/dpdk/spdk_pid762176 00:08:34.259 Removing: /var/run/dpdk/spdk_pid762200 00:08:34.259 Removing: /var/run/dpdk/spdk_pid762604 00:08:34.259 Removing: /var/run/dpdk/spdk_pid762921 00:08:34.259 Removing: /var/run/dpdk/spdk_pid763238 00:08:34.259 Removing: /var/run/dpdk/spdk_pid763586 00:08:34.259 Removing: /var/run/dpdk/spdk_pid763904 00:08:34.259 Removing: /var/run/dpdk/spdk_pid764190 00:08:34.259 Removing: /var/run/dpdk/spdk_pid764480 00:08:34.259 Removing: /var/run/dpdk/spdk_pid764791 00:08:34.259 Removing: /var/run/dpdk/spdk_pid765647 00:08:34.259 Removing: /var/run/dpdk/spdk_pid768573 00:08:34.259 Removing: /var/run/dpdk/spdk_pid768895 00:08:34.259 Removing: /var/run/dpdk/spdk_pid769329 00:08:34.259 Removing: /var/run/dpdk/spdk_pid769419 00:08:34.259 Removing: /var/run/dpdk/spdk_pid769990 00:08:34.259 Removing: /var/run/dpdk/spdk_pid770249 00:08:34.259 Removing: /var/run/dpdk/spdk_pid770698 00:08:34.259 Removing: /var/run/dpdk/spdk_pid770829 00:08:34.259 Removing: /var/run/dpdk/spdk_pid771129 00:08:34.259 Removing: /var/run/dpdk/spdk_pid771394 00:08:34.259 Removing: /var/run/dpdk/spdk_pid771568 00:08:34.259 Removing: /var/run/dpdk/spdk_pid771707 00:08:34.259 Removing: /var/run/dpdk/spdk_pid772244 00:08:34.259 Removing: /var/run/dpdk/spdk_pid772415 00:08:34.259 Removing: /var/run/dpdk/spdk_pid772651 00:08:34.259 Removing: /var/run/dpdk/spdk_pid772970 00:08:34.259 Removing: /var/run/dpdk/spdk_pid773272 00:08:34.259 Removing: /var/run/dpdk/spdk_pid773297 00:08:34.259 Removing: /var/run/dpdk/spdk_pid773452 00:08:34.259 Removing: /var/run/dpdk/spdk_pid773674 00:08:34.259 Removing: /var/run/dpdk/spdk_pid773938 00:08:34.259 Removing: /var/run/dpdk/spdk_pid774221 00:08:34.259 Removing: /var/run/dpdk/spdk_pid774508 00:08:34.259 Removing: /var/run/dpdk/spdk_pid774787 00:08:34.259 Removing: /var/run/dpdk/spdk_pid775072 00:08:34.259 Removing: 
/var/run/dpdk/spdk_pid775358 00:08:34.259 Removing: /var/run/dpdk/spdk_pid775714 00:08:34.259 Removing: /var/run/dpdk/spdk_pid776054 00:08:34.259 Removing: /var/run/dpdk/spdk_pid776379 00:08:34.259 Removing: /var/run/dpdk/spdk_pid776975 00:08:34.259 Removing: /var/run/dpdk/spdk_pid777319 00:08:34.259 Removing: /var/run/dpdk/spdk_pid777533 00:08:34.259 Removing: /var/run/dpdk/spdk_pid777761 00:08:34.259 Removing: /var/run/dpdk/spdk_pid777990 00:08:34.259 Removing: /var/run/dpdk/spdk_pid778234 00:08:34.259 Removing: /var/run/dpdk/spdk_pid778519 00:08:34.259 Removing: /var/run/dpdk/spdk_pid778804 00:08:34.259 Removing: /var/run/dpdk/spdk_pid779094 00:08:34.259 Removing: /var/run/dpdk/spdk_pid779374 00:08:34.259 Removing: /var/run/dpdk/spdk_pid779505 00:08:34.259 Removing: /var/run/dpdk/spdk_pid779920 00:08:34.259 Removing: /var/run/dpdk/spdk_pid780516 00:08:34.259 Removing: /var/run/dpdk/spdk_pid781049 00:08:34.259 Removing: /var/run/dpdk/spdk_pid781485 00:08:34.259 Removing: /var/run/dpdk/spdk_pid781871 00:08:34.259 Removing: /var/run/dpdk/spdk_pid782408 00:08:34.259 Removing: /var/run/dpdk/spdk_pid782944 00:08:34.259 Removing: /var/run/dpdk/spdk_pid783235 00:08:34.259 Removing: /var/run/dpdk/spdk_pid783768 00:08:34.259 Removing: /var/run/dpdk/spdk_pid784299 00:08:34.259 Removing: /var/run/dpdk/spdk_pid784596 00:08:34.259 Removing: /var/run/dpdk/spdk_pid785123 00:08:34.259 Removing: /var/run/dpdk/spdk_pid785635 00:08:34.259 Removing: /var/run/dpdk/spdk_pid785943 00:08:34.260 Removing: /var/run/dpdk/spdk_pid786476 00:08:34.260 Removing: /var/run/dpdk/spdk_pid786934 00:08:34.260 Removing: /var/run/dpdk/spdk_pid787295 00:08:34.260 Removing: /var/run/dpdk/spdk_pid787830 00:08:34.260 Removing: /var/run/dpdk/spdk_pid788251 00:08:34.260 Removing: /var/run/dpdk/spdk_pid788655 00:08:34.260 Removing: /var/run/dpdk/spdk_pid789190 00:08:34.260 Removing: /var/run/dpdk/spdk_pid789606 00:08:34.260 Removing: /var/run/dpdk/spdk_pid790016 00:08:34.260 Removing: /var/run/dpdk/spdk_pid790545 00:08:34.260 Removing: /var/run/dpdk/spdk_pid790958 00:08:34.260 Removing: /var/run/dpdk/spdk_pid791369 00:08:34.260 Removing: /var/run/dpdk/spdk_pid791978 00:08:34.260 Removing: /var/run/dpdk/spdk_pid792512 00:08:34.260 Removing: /var/run/dpdk/spdk_pid793014 00:08:34.260 Removing: /var/run/dpdk/spdk_pid793349 00:08:34.260 Removing: /var/run/dpdk/spdk_pid793881 00:08:34.260 Removing: /var/run/dpdk/spdk_pid794425 00:08:34.260 Removing: /var/run/dpdk/spdk_pid794969 00:08:34.260 Clean 00:08:34.260 20:59:01 -- common/autotest_common.sh@1451 -- # return 0 00:08:34.260 20:59:01 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:08:34.260 20:59:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:34.260 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:08:34.260 20:59:01 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:08:34.260 20:59:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:34.260 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:08:34.260 20:59:01 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:34.260 20:59:01 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:34.260 20:59:01 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:34.260 20:59:01 -- spdk/autotest.sh@391 -- # hash lcov 00:08:34.260 20:59:01 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:08:34.260 20:59:01 
-- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:34.260 20:59:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:34.260 20:59:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.260 20:59:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.260 20:59:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.260 20:59:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.260 20:59:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.260 20:59:01 -- paths/export.sh@5 -- $ export PATH 00:08:34.260 20:59:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.260 20:59:01 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:34.260 20:59:01 -- common/autobuild_common.sh@444 -- $ date +%s 00:08:34.260 20:59:01 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069941.XXXXXX 00:08:34.260 20:59:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721069941.50mhee 00:08:34.260 20:59:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:08:34.260 20:59:01 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:08:34.260 20:59:01 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:08:34.260 20:59:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:34.260 20:59:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:34.260 20:59:01 -- common/autobuild_common.sh@460 -- $ get_config_params 00:08:34.260 20:59:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:08:34.260 20:59:01 -- common/autotest_common.sh@10 -- $ set +x 
00:08:34.260 20:59:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:34.260 20:59:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:08:34.260 20:59:01 -- pm/common@17 -- $ local monitor 00:08:34.260 20:59:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:34.260 20:59:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:34.260 20:59:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:34.260 20:59:01 -- pm/common@21 -- $ date +%s 00:08:34.260 20:59:01 -- pm/common@21 -- $ date +%s 00:08:34.260 20:59:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:34.260 20:59:01 -- pm/common@25 -- $ sleep 1 00:08:34.260 20:59:01 -- pm/common@21 -- $ date +%s 00:08:34.260 20:59:01 -- pm/common@21 -- $ date +%s 00:08:34.260 20:59:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069941 00:08:34.260 20:59:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069941 00:08:34.260 20:59:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069941 00:08:34.260 20:59:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069941 00:08:34.260 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069941_collect-vmstat.pm.log 00:08:34.260 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069941_collect-cpu-load.pm.log 00:08:34.260 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069941_collect-cpu-temp.pm.log 00:08:34.260 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069941_collect-bmc-pm.bmc.pm.log 00:08:35.195 20:59:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:08:35.195 20:59:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:08:35.195 20:59:02 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:35.195 20:59:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:08:35.195 20:59:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:08:35.195 20:59:02 -- spdk/autopackage.sh@19 -- $ timing_finish 00:08:35.195 20:59:02 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:35.195 20:59:02 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:08:35.195 20:59:02 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:35.454 20:59:02 -- spdk/autopackage.sh@20 -- $ exit 0 
00:08:35.454 20:59:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:35.454 20:59:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:35.454 20:59:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:35.454 20:59:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:35.454 20:59:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:35.454 20:59:02 -- pm/common@44 -- $ pid=802029 00:08:35.454 20:59:02 -- pm/common@50 -- $ kill -TERM 802029 00:08:35.454 20:59:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:35.454 20:59:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:35.454 20:59:02 -- pm/common@44 -- $ pid=802031 00:08:35.454 20:59:02 -- pm/common@50 -- $ kill -TERM 802031 00:08:35.454 20:59:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:35.454 20:59:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:35.454 20:59:02 -- pm/common@44 -- $ pid=802033 00:08:35.454 20:59:02 -- pm/common@50 -- $ kill -TERM 802033 00:08:35.454 20:59:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:35.454 20:59:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:35.454 20:59:02 -- pm/common@44 -- $ pid=802056 00:08:35.454 20:59:02 -- pm/common@50 -- $ sudo -E kill -TERM 802056 00:08:35.454 + [[ -n 651132 ]] 00:08:35.454 + sudo kill 651132 00:08:35.464 [Pipeline] } 00:08:35.485 [Pipeline] // stage 00:08:35.488 [Pipeline] } 00:08:35.504 [Pipeline] // timeout 00:08:35.508 [Pipeline] } 00:08:35.526 [Pipeline] // catchError 00:08:35.531 [Pipeline] } 00:08:35.547 [Pipeline] // wrap 00:08:35.553 [Pipeline] } 00:08:35.568 [Pipeline] // catchError 00:08:35.576 [Pipeline] stage 00:08:35.577 [Pipeline] { (Epilogue) 00:08:35.586 [Pipeline] catchError 00:08:35.586 [Pipeline] { 00:08:35.595 [Pipeline] echo 00:08:35.596 Cleanup processes 00:08:35.599 [Pipeline] sh 00:08:35.877 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:35.877 713195 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069605 00:08:35.877 713229 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721069605 00:08:35.877 802198 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:35.877 803051 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:35.892 [Pipeline] sh 00:08:36.174 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:36.174 ++ grep -v 'sudo pgrep' 00:08:36.174 ++ awk '{print $1}' 00:08:36.174 + sudo kill -9 713195 713229 802198 00:08:36.184 [Pipeline] sh 00:08:36.464 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:36.464 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:08:36.464 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:08:37.858 [Pipeline] sh 00:08:38.141 + 
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:38.141 Artifacts sizes are good 00:08:38.162 [Pipeline] archiveArtifacts 00:08:38.171 Archiving artifacts 00:08:38.226 [Pipeline] sh 00:08:38.513 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:38.529 [Pipeline] cleanWs 00:08:38.591 [WS-CLEANUP] Deleting project workspace... 00:08:38.591 [WS-CLEANUP] Deferred wipeout is used... 00:08:38.598 [WS-CLEANUP] done 00:08:38.600 [Pipeline] } 00:08:38.624 [Pipeline] // catchError 00:08:38.639 [Pipeline] sh 00:08:38.924 + logger -p user.info -t JENKINS-CI 00:08:38.934 [Pipeline] } 00:08:38.953 [Pipeline] // stage 00:08:38.960 [Pipeline] } 00:08:38.975 [Pipeline] // node 00:08:38.982 [Pipeline] End of Pipeline 00:08:39.015 Finished: SUCCESS