00:00:00.001 Started by upstream project "autotest-per-patch" build number 132342 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.031 The recommended git tool is: git 00:00:00.031 using credential 00000000-0000-0000-0000-000000000002 00:00:00.034 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.051 Fetching changes from the remote Git repository 00:00:00.052 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.076 Using shallow fetch with depth 1 00:00:00.076 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.076 > git --version # timeout=10 00:00:00.109 > git --version # 'git version 2.39.2' 00:00:00.109 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.161 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.161 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.419 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.431 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.443 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.443 > git config core.sparsecheckout # timeout=10 00:00:03.455 > git read-tree -mu HEAD # timeout=10 00:00:03.473 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.502 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.502 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.616 [Pipeline] Start of Pipeline 00:00:03.629 [Pipeline] library 00:00:03.630 Loading library shm_lib@master 00:00:03.630 Library shm_lib@master is cached. Copying from home. 00:00:03.648 [Pipeline] node 00:00:03.667 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.668 [Pipeline] { 00:00:03.683 [Pipeline] catchError 00:00:03.685 [Pipeline] { 00:00:03.696 [Pipeline] wrap 00:00:03.703 [Pipeline] { 00:00:03.713 [Pipeline] stage 00:00:03.714 [Pipeline] { (Prologue) 00:00:03.902 [Pipeline] sh 00:00:04.185 + logger -p user.info -t JENKINS-CI 00:00:04.234 [Pipeline] echo 00:00:04.236 Node: WFP20 00:00:04.244 [Pipeline] sh 00:00:04.539 [Pipeline] setCustomBuildProperty 00:00:04.548 [Pipeline] echo 00:00:04.549 Cleanup processes 00:00:04.553 [Pipeline] sh 00:00:04.835 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.835 3647156 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.849 [Pipeline] sh 00:00:05.124 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:05.124 ++ grep -v 'sudo pgrep' 00:00:05.124 ++ awk '{print $1}' 00:00:05.124 + sudo kill -9 00:00:05.124 + true 00:00:05.141 [Pipeline] cleanWs 00:00:05.149 [WS-CLEANUP] Deleting project workspace... 00:00:05.149 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.154 [WS-CLEANUP] done 00:00:05.157 [Pipeline] setCustomBuildProperty 00:00:05.169 [Pipeline] sh 00:00:05.448 + sudo git config --global --replace-all safe.directory '*' 00:00:05.644 [Pipeline] httpRequest 00:00:05.983 [Pipeline] echo 00:00:05.984 Sorcerer 10.211.164.20 is alive 00:00:05.992 [Pipeline] retry 00:00:05.993 [Pipeline] { 00:00:06.005 [Pipeline] httpRequest 00:00:06.008 HttpMethod: GET 00:00:06.009 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.009 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.012 Response Code: HTTP/1.1 200 OK 00:00:06.012 Success: Status code 200 is in the accepted range: 200,404 00:00:06.013 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.114 [Pipeline] } 00:00:07.128 [Pipeline] // retry 00:00:07.135 [Pipeline] sh 00:00:07.421 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.433 [Pipeline] httpRequest 00:00:07.738 [Pipeline] echo 00:00:07.739 Sorcerer 10.211.164.20 is alive 00:00:07.746 [Pipeline] retry 00:00:07.748 [Pipeline] { 00:00:07.760 [Pipeline] httpRequest 00:00:07.764 HttpMethod: GET 00:00:07.764 URL: http://10.211.164.20/packages/spdk_6745f139b200563199b98ad5eb6bf424010a949d.tar.gz 00:00:07.765 Sending request to url: http://10.211.164.20/packages/spdk_6745f139b200563199b98ad5eb6bf424010a949d.tar.gz 00:00:07.776 Response Code: HTTP/1.1 200 OK 00:00:07.776 Success: Status code 200 is in the accepted range: 200,404 00:00:07.777 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_6745f139b200563199b98ad5eb6bf424010a949d.tar.gz 00:01:02.679 [Pipeline] } 00:01:02.697 [Pipeline] // retry 00:01:02.704 [Pipeline] sh 00:01:02.982 + tar --no-same-owner -xf spdk_6745f139b200563199b98ad5eb6bf424010a949d.tar.gz 00:01:05.522 [Pipeline] sh 00:01:05.805 + git -C spdk log --oneline -n5 00:01:05.805 6745f139b bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:01:05.805 866ba5ffe bdev: Factor out checking bounce buffer necessity into helper function 00:01:05.805 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:01:05.805 3b58329b1 bdev: Use data_block_size for upper layer buffer if no_metadata is true 00:01:05.805 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:01:05.816 [Pipeline] } 00:01:05.828 [Pipeline] // stage 00:01:05.837 [Pipeline] stage 00:01:05.839 [Pipeline] { (Prepare) 00:01:05.857 [Pipeline] writeFile 00:01:05.874 [Pipeline] sh 00:01:06.155 + logger -p user.info -t JENKINS-CI 00:01:06.168 [Pipeline] sh 00:01:06.449 + logger -p user.info -t JENKINS-CI 00:01:06.461 [Pipeline] sh 00:01:06.746 + cat autorun-spdk.conf 00:01:06.746 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.746 SPDK_TEST_FUZZER_SHORT=1 00:01:06.746 SPDK_TEST_FUZZER=1 00:01:06.746 SPDK_TEST_SETUP=1 00:01:06.746 SPDK_RUN_UBSAN=1 00:01:06.753 RUN_NIGHTLY=0 00:01:06.759 [Pipeline] readFile 00:01:06.790 [Pipeline] withEnv 00:01:06.792 [Pipeline] { 00:01:06.811 [Pipeline] sh 00:01:07.098 + set -ex 00:01:07.098 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:07.098 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:07.098 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.099 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:07.099 ++ SPDK_TEST_FUZZER=1 00:01:07.099 ++ SPDK_TEST_SETUP=1 00:01:07.099 ++ 
SPDK_RUN_UBSAN=1 00:01:07.099 ++ RUN_NIGHTLY=0 00:01:07.099 + case $SPDK_TEST_NVMF_NICS in 00:01:07.099 + DRIVERS= 00:01:07.099 + [[ -n '' ]] 00:01:07.099 + exit 0 00:01:07.108 [Pipeline] } 00:01:07.126 [Pipeline] // withEnv 00:01:07.133 [Pipeline] } 00:01:07.149 [Pipeline] // stage 00:01:07.161 [Pipeline] catchError 00:01:07.163 [Pipeline] { 00:01:07.179 [Pipeline] timeout 00:01:07.179 Timeout set to expire in 30 min 00:01:07.181 [Pipeline] { 00:01:07.197 [Pipeline] stage 00:01:07.199 [Pipeline] { (Tests) 00:01:07.215 [Pipeline] sh 00:01:07.500 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.500 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.500 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.500 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:07.500 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:07.500 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:07.500 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:07.500 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:07.500 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:07.500 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:07.500 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:07.500 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.500 + source /etc/os-release 00:01:07.500 ++ NAME='Fedora Linux' 00:01:07.500 ++ VERSION='39 (Cloud Edition)' 00:01:07.500 ++ ID=fedora 00:01:07.500 ++ VERSION_ID=39 00:01:07.500 ++ VERSION_CODENAME= 00:01:07.500 ++ PLATFORM_ID=platform:f39 00:01:07.500 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:07.500 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:07.500 ++ LOGO=fedora-logo-icon 00:01:07.500 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:07.500 ++ HOME_URL=https://fedoraproject.org/ 00:01:07.500 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:07.500 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:07.500 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:07.500 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:07.500 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:07.500 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:07.500 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:07.500 ++ SUPPORT_END=2024-11-12 00:01:07.500 ++ VARIANT='Cloud Edition' 00:01:07.500 ++ VARIANT_ID=cloud 00:01:07.500 + uname -a 00:01:07.500 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:07.500 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:10.789 Hugepages 00:01:10.789 node hugesize free / total 00:01:10.789 node0 1048576kB 0 / 0 00:01:10.789 node0 2048kB 0 / 0 00:01:10.789 node1 1048576kB 0 / 0 00:01:10.789 node1 2048kB 0 / 0 00:01:10.789 00:01:10.789 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:10.789 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:10.789 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:10.789 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:10.789 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:10.789 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:10.789 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:10.789 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:10.789 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:10.789 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:10.789 I/OAT 0000:80:04.1 
8086 2021 1 ioatdma - - 00:01:10.789 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:10.789 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:10.789 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:10.789 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:10.789 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:10.789 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:10.789 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:10.789 + rm -f /tmp/spdk-ld-path 00:01:10.789 + source autorun-spdk.conf 00:01:10.789 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.789 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:10.789 ++ SPDK_TEST_FUZZER=1 00:01:10.789 ++ SPDK_TEST_SETUP=1 00:01:10.789 ++ SPDK_RUN_UBSAN=1 00:01:10.789 ++ RUN_NIGHTLY=0 00:01:10.789 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:10.789 + [[ -n '' ]] 00:01:10.789 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:10.789 + for M in /var/spdk/build-*-manifest.txt 00:01:10.789 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:10.789 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:10.789 + for M in /var/spdk/build-*-manifest.txt 00:01:10.789 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:10.789 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:10.789 + for M in /var/spdk/build-*-manifest.txt 00:01:10.789 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:10.789 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:10.789 ++ uname 00:01:10.789 + [[ Linux == \L\i\n\u\x ]] 00:01:10.789 + sudo dmesg -T 00:01:10.789 + sudo dmesg --clear 00:01:10.789 + dmesg_pid=3648610 00:01:10.789 + [[ Fedora Linux == FreeBSD ]] 00:01:10.789 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.789 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.789 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:10.789 + [[ -x /usr/src/fio-static/fio ]] 00:01:10.789 + export FIO_BIN=/usr/src/fio-static/fio 00:01:10.789 + FIO_BIN=/usr/src/fio-static/fio 00:01:10.789 + sudo dmesg -Tw 00:01:10.789 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:10.789 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:10.789 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:10.789 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.789 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.789 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:10.789 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.789 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.789 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:10.789 06:57:15 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:10.789 06:57:15 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:10.789 06:57:15 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.789 06:57:15 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:10.789 06:57:15 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:10.789 06:57:15 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:10.789 06:57:15 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:10.789 06:57:15 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:10.789 06:57:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:10.789 06:57:15 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:10.789 06:57:15 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:10.789 06:57:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:10.789 06:57:15 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:10.789 06:57:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:10.789 06:57:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:10.789 06:57:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:10.789 06:57:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.789 06:57:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.789 06:57:15 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.789 06:57:15 -- paths/export.sh@5 -- $ export PATH 00:01:10.789 06:57:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.789 06:57:15 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:10.789 06:57:15 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:10.789 06:57:15 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732082235.XXXXXX 00:01:10.789 06:57:15 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732082235.1yvM22 00:01:10.789 06:57:15 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:10.789 06:57:15 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:10.789 06:57:15 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:10.789 06:57:15 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:10.789 06:57:15 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:10.789 06:57:15 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:10.789 06:57:15 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:10.789 06:57:15 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.789 06:57:15 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:10.789 06:57:15 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:10.789 06:57:15 -- pm/common@17 -- $ local monitor 00:01:10.789 06:57:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.790 06:57:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.790 06:57:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.790 06:57:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.790 06:57:15 -- pm/common@25 -- $ sleep 1 00:01:10.790 06:57:15 -- pm/common@21 -- $ date +%s 00:01:10.790 06:57:15 -- pm/common@21 -- $ date +%s 00:01:10.790 06:57:15 -- pm/common@21 -- $ date +%s 00:01:10.790 06:57:15 -- pm/common@21 -- $ date +%s 00:01:11.049 06:57:15 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082235 00:01:11.049 06:57:15 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082235 00:01:11.049 06:57:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082235 00:01:11.049 06:57:15 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082235 00:01:11.049 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082235_collect-vmstat.pm.log 00:01:11.049 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082235_collect-cpu-load.pm.log 00:01:11.049 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082235_collect-cpu-temp.pm.log 00:01:11.049 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082235_collect-bmc-pm.bmc.pm.log 00:01:11.984 06:57:16 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:11.984 06:57:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:11.984 06:57:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:11.984 06:57:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:11.984 06:57:16 -- spdk/autobuild.sh@16 -- $ date -u 00:01:11.984 Wed Nov 20 05:57:16 AM UTC 2024 00:01:11.984 06:57:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:11.984 v25.01-pre-194-g6745f139b 00:01:11.984 06:57:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:11.984 06:57:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:11.984 06:57:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:11.984 06:57:16 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:11.984 06:57:16 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:11.984 06:57:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.984 ************************************ 00:01:11.984 START TEST ubsan 00:01:11.984 ************************************ 00:01:11.984 06:57:16 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:11.984 using ubsan 00:01:11.984 00:01:11.984 real 0m0.000s 00:01:11.984 user 0m0.000s 00:01:11.984 sys 0m0.000s 00:01:11.984 06:57:16 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:11.984 06:57:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:11.984 ************************************ 00:01:11.984 END TEST ubsan 00:01:11.984 ************************************ 00:01:11.984 06:57:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:11.984 06:57:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:11.984 06:57:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:11.984 06:57:16 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:11.984 06:57:16 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:11.984 06:57:16 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:11.984 06:57:16 -- common/autotest_common.sh@1103 -- $ '[' 2 
-le 1 ']' 00:01:11.984 06:57:16 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:11.984 06:57:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.984 ************************************ 00:01:11.984 START TEST autobuild_llvm_precompile 00:01:11.984 ************************************ 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autotest_common.sh@1127 -- $ _llvm_precompile 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:11.984 Target: x86_64-redhat-linux-gnu 00:01:11.984 Thread model: posix 00:01:11.984 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:11.984 06:57:16 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:12.242 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:12.242 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:12.809 Using 'verbs' RDMA provider 00:01:28.627 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.517 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:43.517 Creating mk/config.mk...done. 00:01:43.517 Creating mk/cc.flags.mk...done. 00:01:43.517 Type 'make' to build. 
00:01:43.517 00:01:43.517 real 0m29.652s 00:01:43.517 user 0m12.978s 00:01:43.517 sys 0m16.029s 00:01:43.517 06:57:46 autobuild_llvm_precompile -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:43.517 06:57:46 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:43.517 ************************************ 00:01:43.517 END TEST autobuild_llvm_precompile 00:01:43.517 ************************************ 00:01:43.517 06:57:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:43.517 06:57:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:43.517 06:57:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:43.517 06:57:46 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:43.517 06:57:46 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:43.517 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:43.517 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:43.517 Using 'verbs' RDMA provider 00:01:55.795 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:05.781 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:06.609 Creating mk/config.mk...done. 00:02:06.609 Creating mk/cc.flags.mk...done. 00:02:06.609 Type 'make' to build. 00:02:06.609 06:58:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:06.609 06:58:10 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:06.609 06:58:10 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:06.609 06:58:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.609 ************************************ 00:02:06.609 START TEST make 00:02:06.609 ************************************ 00:02:06.609 06:58:10 make -- common/autotest_common.sh@1127 -- $ make -j112 00:02:06.866 make[1]: Nothing to be done for 'all'. 
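The autobuild_llvm_precompile step traced above locates the clang libFuzzer "no_main" runtime and re-runs configure with --with-fuzzer. A minimal standalone sketch of that pattern, assuming clang 17 on Fedora as reported by clang --version in the trace; the variable names below are illustrative, not SPDK's exact internals:

    # Derive the clang major version and point configure at the matching
    # libclang_rt.fuzzer_no_main archive, mirroring the
    # common/autobuild_common.sh xtrace above (extglob is needed for the
    # ?(-x86_64) pattern, as in scripts/common.sh).
    shopt -s extglob
    clang_num=$(clang --version | sed -n 's/^clang version \([0-9]\+\).*/\1/p')
    export CC=clang-$clang_num CXX=clang++-$clang_num
    fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
    if [[ -e ${fuzzer_libs[0]} ]]; then
        ./configure --enable-debug --enable-werror --enable-ubsan \
            --with-fuzzer="${fuzzer_libs[0]}"
    fi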
00:02:08.777 The Meson build system 00:02:08.777 Version: 1.5.0 00:02:08.777 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:08.777 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:08.777 Build type: native build 00:02:08.777 Project name: libvfio-user 00:02:08.777 Project version: 0.0.1 00:02:08.777 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:08.777 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:08.777 Host machine cpu family: x86_64 00:02:08.777 Host machine cpu: x86_64 00:02:08.777 Run-time dependency threads found: YES 00:02:08.777 Library dl found: YES 00:02:08.777 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.777 Run-time dependency json-c found: YES 0.17 00:02:08.777 Run-time dependency cmocka found: YES 1.1.7 00:02:08.777 Program pytest-3 found: NO 00:02:08.777 Program flake8 found: NO 00:02:08.777 Program misspell-fixer found: NO 00:02:08.777 Program restructuredtext-lint found: NO 00:02:08.777 Program valgrind found: YES (/usr/bin/valgrind) 00:02:08.777 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.777 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.777 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.777 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:08.777 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:08.777 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:08.777 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:08.777 Build targets in project: 8 00:02:08.777 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:08.777 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:08.777 00:02:08.777 libvfio-user 0.0.1 00:02:08.777 00:02:08.777 User defined options 00:02:08.777 buildtype : debug 00:02:08.777 default_library: static 00:02:08.777 libdir : /usr/local/lib 00:02:08.777 00:02:08.777 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.777 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:08.777 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:08.777 [2/36] Compiling C object samples/null.p/null.c.o 00:02:08.777 [3/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:08.777 [4/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:08.777 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:08.777 [6/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:08.777 [7/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:08.777 [8/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:08.777 [9/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:08.777 [10/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:08.777 [11/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:09.036 [12/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:09.036 [13/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:09.036 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:09.036 [15/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:09.036 [16/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:09.036 [17/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:09.036 [18/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:09.036 [19/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:09.036 [20/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:09.036 [21/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:09.036 [22/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:09.036 [23/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:09.036 [24/36] Compiling C object samples/server.p/server.c.o 00:02:09.036 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:09.036 [26/36] Compiling C object samples/client.p/client.c.o 00:02:09.036 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:09.036 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:09.036 [29/36] Linking static target lib/libvfio-user.a 00:02:09.036 [30/36] Linking target samples/client 00:02:09.036 [31/36] Linking target test/unit_tests 00:02:09.036 [32/36] Linking target samples/shadow_ioeventfd_server 00:02:09.036 [33/36] Linking target samples/server 00:02:09.036 [34/36] Linking target samples/gpio-pci-idio-16 00:02:09.036 [35/36] Linking target samples/null 00:02:09.036 [36/36] Linking target samples/lspci 00:02:09.036 INFO: autodetecting backend as ninja 00:02:09.036 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:09.036 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:09.604 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:09.604 ninja: no work to do. 00:02:14.882 The Meson build system 00:02:14.882 Version: 1.5.0 00:02:14.882 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:14.882 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:14.882 Build type: native build 00:02:14.882 Program cat found: YES (/usr/bin/cat) 00:02:14.882 Project name: DPDK 00:02:14.883 Project version: 24.03.0 00:02:14.883 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:14.883 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:14.883 Host machine cpu family: x86_64 00:02:14.883 Host machine cpu: x86_64 00:02:14.883 Message: ## Building in Developer Mode ## 00:02:14.883 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.883 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:14.883 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.883 Program python3 found: YES (/usr/bin/python3) 00:02:14.883 Program cat found: YES (/usr/bin/cat) 00:02:14.883 Compiler for C supports arguments -march=native: YES 00:02:14.883 Checking for size of "void *" : 8 00:02:14.883 Checking for size of "void *" : 8 (cached) 00:02:14.883 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:14.883 Library m found: YES 00:02:14.883 Library numa found: YES 00:02:14.883 Has header "numaif.h" : YES 00:02:14.883 Library fdt found: NO 00:02:14.883 Library execinfo found: NO 00:02:14.883 Has header "execinfo.h" : YES 00:02:14.883 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.883 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.883 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.883 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.883 Run-time dependency openssl found: YES 3.1.1 00:02:14.883 Run-time dependency libpcap found: YES 1.10.4 00:02:14.883 Has header "pcap.h" with dependency libpcap: YES 00:02:14.883 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.883 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.883 Compiler for C supports arguments -Wformat: YES 00:02:14.883 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:14.883 Compiler for C supports arguments -Wformat-security: YES 00:02:14.883 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.883 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.883 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.883 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.883 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.883 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.883 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.883 Compiler for C supports arguments -Wundef: YES 00:02:14.883 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.883 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.884 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:14.884 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:02:14.884 Program objdump found: YES (/usr/bin/objdump) 00:02:14.884 Compiler for C supports arguments -mavx512f: YES 00:02:14.884 Checking if "AVX512 checking" compiles: YES 00:02:14.884 Fetching value of define "__SSE4_2__" : 1 00:02:14.884 Fetching value of define "__AES__" : 1 00:02:14.884 Fetching value of define "__AVX__" : 1 00:02:14.884 Fetching value of define "__AVX2__" : 1 00:02:14.884 Fetching value of define "__AVX512BW__" : 1 00:02:14.884 Fetching value of define "__AVX512CD__" : 1 00:02:14.884 Fetching value of define "__AVX512DQ__" : 1 00:02:14.884 Fetching value of define "__AVX512F__" : 1 00:02:14.884 Fetching value of define "__AVX512VL__" : 1 00:02:14.884 Fetching value of define "__PCLMUL__" : 1 00:02:14.884 Fetching value of define "__RDRND__" : 1 00:02:14.884 Fetching value of define "__RDSEED__" : 1 00:02:14.884 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.884 Fetching value of define "__znver1__" : (undefined) 00:02:14.884 Fetching value of define "__znver2__" : (undefined) 00:02:14.884 Fetching value of define "__znver3__" : (undefined) 00:02:14.884 Fetching value of define "__znver4__" : (undefined) 00:02:14.884 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:14.884 Message: lib/log: Defining dependency "log" 00:02:14.884 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.884 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.884 Checking for function "getentropy" : NO 00:02:14.884 Message: lib/eal: Defining dependency "eal" 00:02:14.884 Message: lib/ring: Defining dependency "ring" 00:02:14.884 Message: lib/rcu: Defining dependency "rcu" 00:02:14.884 Message: lib/mempool: Defining dependency "mempool" 00:02:14.884 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.884 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.884 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.884 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.884 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.884 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.884 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:14.884 Compiler for C supports arguments -mpclmul: YES 00:02:14.884 Compiler for C supports arguments -maes: YES 00:02:14.884 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.884 Compiler for C supports arguments -mavx512bw: YES 00:02:14.884 Compiler for C supports arguments -mavx512dq: YES 00:02:14.884 Compiler for C supports arguments -mavx512vl: YES 00:02:14.884 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.884 Compiler for C supports arguments -mavx2: YES 00:02:14.884 Compiler for C supports arguments -mavx: YES 00:02:14.884 Message: lib/net: Defining dependency "net" 00:02:14.884 Message: lib/meter: Defining dependency "meter" 00:02:14.884 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.884 Message: lib/pci: Defining dependency "pci" 00:02:14.884 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.884 Message: lib/hash: Defining dependency "hash" 00:02:14.884 Message: lib/timer: Defining dependency "timer" 00:02:14.885 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.885 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.885 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.885 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.885 Message: lib/power: Defining dependency "power" 00:02:14.885 Message: lib/reorder: Defining 
dependency "reorder" 00:02:14.885 Message: lib/security: Defining dependency "security" 00:02:14.885 Has header "linux/userfaultfd.h" : YES 00:02:14.885 Has header "linux/vduse.h" : YES 00:02:14.885 Message: lib/vhost: Defining dependency "vhost" 00:02:14.885 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:14.885 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.885 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.885 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.885 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:14.885 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:14.885 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:14.885 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:14.885 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:14.885 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:14.885 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.885 Configuring doxy-api-html.conf using configuration 00:02:14.885 Configuring doxy-api-man.conf using configuration 00:02:14.885 Program mandb found: YES (/usr/bin/mandb) 00:02:14.885 Program sphinx-build found: NO 00:02:14.885 Configuring rte_build_config.h using configuration 00:02:14.885 Message: 00:02:14.885 ================= 00:02:14.885 Applications Enabled 00:02:14.885 ================= 00:02:14.885 00:02:14.885 apps: 00:02:14.885 00:02:14.885 00:02:14.885 Message: 00:02:14.885 ================= 00:02:14.885 Libraries Enabled 00:02:14.885 ================= 00:02:14.885 00:02:14.885 libs: 00:02:14.886 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:14.886 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:14.886 cryptodev, dmadev, power, reorder, security, vhost, 00:02:14.886 00:02:14.886 Message: 00:02:14.886 =============== 00:02:14.886 Drivers Enabled 00:02:14.886 =============== 00:02:14.886 00:02:14.886 common: 00:02:14.886 00:02:14.886 bus: 00:02:14.886 pci, vdev, 00:02:14.886 mempool: 00:02:14.886 ring, 00:02:14.886 dma: 00:02:14.886 00:02:14.886 net: 00:02:14.886 00:02:14.886 crypto: 00:02:14.886 00:02:14.886 compress: 00:02:14.886 00:02:14.886 vdpa: 00:02:14.886 00:02:14.886 00:02:14.886 Message: 00:02:14.886 ================= 00:02:14.886 Content Skipped 00:02:14.886 ================= 00:02:14.886 00:02:14.886 apps: 00:02:14.886 dumpcap: explicitly disabled via build config 00:02:14.886 graph: explicitly disabled via build config 00:02:14.886 pdump: explicitly disabled via build config 00:02:14.886 proc-info: explicitly disabled via build config 00:02:14.886 test-acl: explicitly disabled via build config 00:02:14.886 test-bbdev: explicitly disabled via build config 00:02:14.886 test-cmdline: explicitly disabled via build config 00:02:14.886 test-compress-perf: explicitly disabled via build config 00:02:14.886 test-crypto-perf: explicitly disabled via build config 00:02:14.886 test-dma-perf: explicitly disabled via build config 00:02:14.886 test-eventdev: explicitly disabled via build config 00:02:14.886 test-fib: explicitly disabled via build config 00:02:14.886 test-flow-perf: explicitly disabled via build config 00:02:14.886 test-gpudev: explicitly disabled via build config 00:02:14.886 test-mldev: explicitly disabled via build config 00:02:14.886 test-pipeline: explicitly disabled via build config 00:02:14.886 test-pmd: 
explicitly disabled via build config 00:02:14.886 test-regex: explicitly disabled via build config 00:02:14.886 test-sad: explicitly disabled via build config 00:02:14.886 test-security-perf: explicitly disabled via build config 00:02:14.886 00:02:14.886 libs: 00:02:14.886 argparse: explicitly disabled via build config 00:02:14.886 metrics: explicitly disabled via build config 00:02:14.886 acl: explicitly disabled via build config 00:02:14.887 bbdev: explicitly disabled via build config 00:02:14.887 bitratestats: explicitly disabled via build config 00:02:14.887 bpf: explicitly disabled via build config 00:02:14.887 cfgfile: explicitly disabled via build config 00:02:14.887 distributor: explicitly disabled via build config 00:02:14.887 efd: explicitly disabled via build config 00:02:14.887 eventdev: explicitly disabled via build config 00:02:14.887 dispatcher: explicitly disabled via build config 00:02:14.887 gpudev: explicitly disabled via build config 00:02:14.887 gro: explicitly disabled via build config 00:02:14.887 gso: explicitly disabled via build config 00:02:14.887 ip_frag: explicitly disabled via build config 00:02:14.887 jobstats: explicitly disabled via build config 00:02:14.887 latencystats: explicitly disabled via build config 00:02:14.887 lpm: explicitly disabled via build config 00:02:14.887 member: explicitly disabled via build config 00:02:14.887 pcapng: explicitly disabled via build config 00:02:14.887 rawdev: explicitly disabled via build config 00:02:14.887 regexdev: explicitly disabled via build config 00:02:14.887 mldev: explicitly disabled via build config 00:02:14.887 rib: explicitly disabled via build config 00:02:14.887 sched: explicitly disabled via build config 00:02:14.887 stack: explicitly disabled via build config 00:02:14.887 ipsec: explicitly disabled via build config 00:02:14.887 pdcp: explicitly disabled via build config 00:02:14.887 fib: explicitly disabled via build config 00:02:14.887 port: explicitly disabled via build config 00:02:14.887 pdump: explicitly disabled via build config 00:02:14.887 table: explicitly disabled via build config 00:02:14.887 pipeline: explicitly disabled via build config 00:02:14.887 graph: explicitly disabled via build config 00:02:14.887 node: explicitly disabled via build config 00:02:14.887 00:02:14.887 drivers: 00:02:14.887 common/cpt: not in enabled drivers build config 00:02:14.887 common/dpaax: not in enabled drivers build config 00:02:14.887 common/iavf: not in enabled drivers build config 00:02:14.887 common/idpf: not in enabled drivers build config 00:02:14.887 common/ionic: not in enabled drivers build config 00:02:14.887 common/mvep: not in enabled drivers build config 00:02:14.887 common/octeontx: not in enabled drivers build config 00:02:14.887 bus/auxiliary: not in enabled drivers build config 00:02:14.887 bus/cdx: not in enabled drivers build config 00:02:14.887 bus/dpaa: not in enabled drivers build config 00:02:14.887 bus/fslmc: not in enabled drivers build config 00:02:14.887 bus/ifpga: not in enabled drivers build config 00:02:14.887 bus/platform: not in enabled drivers build config 00:02:14.887 bus/uacce: not in enabled drivers build config 00:02:14.887 bus/vmbus: not in enabled drivers build config 00:02:14.887 common/cnxk: not in enabled drivers build config 00:02:14.887 common/mlx5: not in enabled drivers build config 00:02:14.887 common/nfp: not in enabled drivers build config 00:02:14.887 common/nitrox: not in enabled drivers build config 00:02:14.887 common/qat: not in enabled drivers build config 
00:02:14.888 common/sfc_efx: not in enabled drivers build config 00:02:14.888 mempool/bucket: not in enabled drivers build config 00:02:14.888 mempool/cnxk: not in enabled drivers build config 00:02:14.888 mempool/dpaa: not in enabled drivers build config 00:02:14.888 mempool/dpaa2: not in enabled drivers build config 00:02:14.888 mempool/octeontx: not in enabled drivers build config 00:02:14.888 mempool/stack: not in enabled drivers build config 00:02:14.888 dma/cnxk: not in enabled drivers build config 00:02:14.888 dma/dpaa: not in enabled drivers build config 00:02:14.888 dma/dpaa2: not in enabled drivers build config 00:02:14.888 dma/hisilicon: not in enabled drivers build config 00:02:14.888 dma/idxd: not in enabled drivers build config 00:02:14.888 dma/ioat: not in enabled drivers build config 00:02:14.888 dma/skeleton: not in enabled drivers build config 00:02:14.888 net/af_packet: not in enabled drivers build config 00:02:14.888 net/af_xdp: not in enabled drivers build config 00:02:14.888 net/ark: not in enabled drivers build config 00:02:14.888 net/atlantic: not in enabled drivers build config 00:02:14.888 net/avp: not in enabled drivers build config 00:02:14.888 net/axgbe: not in enabled drivers build config 00:02:14.888 net/bnx2x: not in enabled drivers build config 00:02:14.888 net/bnxt: not in enabled drivers build config 00:02:14.888 net/bonding: not in enabled drivers build config 00:02:14.888 net/cnxk: not in enabled drivers build config 00:02:14.888 net/cpfl: not in enabled drivers build config 00:02:14.888 net/cxgbe: not in enabled drivers build config 00:02:14.888 net/dpaa: not in enabled drivers build config 00:02:14.888 net/dpaa2: not in enabled drivers build config 00:02:14.888 net/e1000: not in enabled drivers build config 00:02:14.888 net/ena: not in enabled drivers build config 00:02:14.888 net/enetc: not in enabled drivers build config 00:02:14.888 net/enetfec: not in enabled drivers build config 00:02:14.888 net/enic: not in enabled drivers build config 00:02:14.888 net/failsafe: not in enabled drivers build config 00:02:14.888 net/fm10k: not in enabled drivers build config 00:02:14.888 net/gve: not in enabled drivers build config 00:02:14.888 net/hinic: not in enabled drivers build config 00:02:14.888 net/hns3: not in enabled drivers build config 00:02:14.888 net/i40e: not in enabled drivers build config 00:02:14.888 net/iavf: not in enabled drivers build config 00:02:14.888 net/ice: not in enabled drivers build config 00:02:14.888 net/idpf: not in enabled drivers build config 00:02:14.888 net/igc: not in enabled drivers build config 00:02:14.888 net/ionic: not in enabled drivers build config 00:02:14.888 net/ipn3ke: not in enabled drivers build config 00:02:14.888 net/ixgbe: not in enabled drivers build config 00:02:14.888 net/mana: not in enabled drivers build config 00:02:14.888 net/memif: not in enabled drivers build config 00:02:14.888 net/mlx4: not in enabled drivers build config 00:02:14.888 net/mlx5: not in enabled drivers build config 00:02:14.888 net/mvneta: not in enabled drivers build config 00:02:14.888 net/mvpp2: not in enabled drivers build config 00:02:14.888 net/netvsc: not in enabled drivers build config 00:02:14.888 net/nfb: not in enabled drivers build config 00:02:14.888 net/nfp: not in enabled drivers build config 00:02:14.888 net/ngbe: not in enabled drivers build config 00:02:14.888 net/null: not in enabled drivers build config 00:02:14.888 net/octeontx: not in enabled drivers build config 00:02:14.888 net/octeon_ep: not in enabled 
drivers build config 00:02:14.888 net/pcap: not in enabled drivers build config 00:02:14.888 net/pfe: not in enabled drivers build config 00:02:14.888 net/qede: not in enabled drivers build config 00:02:14.888 net/ring: not in enabled drivers build config 00:02:14.888 net/sfc: not in enabled drivers build config 00:02:14.888 net/softnic: not in enabled drivers build config 00:02:14.888 net/tap: not in enabled drivers build config 00:02:14.888 net/thunderx: not in enabled drivers build config 00:02:14.888 net/txgbe: not in enabled drivers build config 00:02:14.888 net/vdev_netvsc: not in enabled drivers build config 00:02:14.888 net/vhost: not in enabled drivers build config 00:02:14.888 net/virtio: not in enabled drivers build config 00:02:14.888 net/vmxnet3: not in enabled drivers build config 00:02:14.888 raw/*: missing internal dependency, "rawdev" 00:02:14.888 crypto/armv8: not in enabled drivers build config 00:02:14.888 crypto/bcmfs: not in enabled drivers build config 00:02:14.888 crypto/caam_jr: not in enabled drivers build config 00:02:14.888 crypto/ccp: not in enabled drivers build config 00:02:14.888 crypto/cnxk: not in enabled drivers build config 00:02:14.888 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.888 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.888 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.888 crypto/mlx5: not in enabled drivers build config 00:02:14.888 crypto/mvsam: not in enabled drivers build config 00:02:14.888 crypto/nitrox: not in enabled drivers build config 00:02:14.888 crypto/null: not in enabled drivers build config 00:02:14.888 crypto/octeontx: not in enabled drivers build config 00:02:14.888 crypto/openssl: not in enabled drivers build config 00:02:14.888 crypto/scheduler: not in enabled drivers build config 00:02:14.888 crypto/uadk: not in enabled drivers build config 00:02:14.888 crypto/virtio: not in enabled drivers build config 00:02:14.888 compress/isal: not in enabled drivers build config 00:02:14.888 compress/mlx5: not in enabled drivers build config 00:02:14.888 compress/nitrox: not in enabled drivers build config 00:02:14.888 compress/octeontx: not in enabled drivers build config 00:02:14.888 compress/zlib: not in enabled drivers build config 00:02:14.888 regex/*: missing internal dependency, "regexdev" 00:02:14.888 ml/*: missing internal dependency, "mldev" 00:02:14.888 vdpa/ifc: not in enabled drivers build config 00:02:14.888 vdpa/mlx5: not in enabled drivers build config 00:02:14.888 vdpa/nfp: not in enabled drivers build config 00:02:14.888 vdpa/sfc: not in enabled drivers build config 00:02:14.888 event/*: missing internal dependency, "eventdev" 00:02:14.888 baseband/*: missing internal dependency, "bbdev" 00:02:14.888 gpu/*: missing internal dependency, "gpudev" 00:02:14.888 00:02:14.888 00:02:14.888 Build targets in project: 85 00:02:14.888 00:02:14.888 DPDK 24.03.0 00:02:14.888 00:02:14.888 User defined options 00:02:14.888 buildtype : debug 00:02:14.888 default_library : static 00:02:14.888 libdir : lib 00:02:14.888 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:14.888 c_args : -fPIC -Werror 00:02:14.888 c_link_args : 00:02:14.888 cpu_instruction_set: native 00:02:14.888 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:14.888 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:14.888 enable_docs : false 00:02:14.888 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:14.888 enable_kmods : false 00:02:14.888 max_lcores : 128 00:02:14.888 tests : false 00:02:14.888 00:02:14.888 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.458 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:15.458 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.458 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.458 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.458 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.458 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.458 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.458 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.458 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.458 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.458 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.458 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.458 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.458 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.458 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.458 [15/268] Linking static target lib/librte_kvargs.a 00:02:15.458 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.458 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.458 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.458 [19/268] Linking static target lib/librte_log.a 00:02:15.458 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.458 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.458 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.459 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.459 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.459 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.459 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.459 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.459 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.459 [29/268] Linking static target lib/librte_pci.a 00:02:15.459 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.716 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:15.716 [32/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:15.716 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.716 [34/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.716 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.976 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.976 [37/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.976 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.976 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.976 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.976 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.976 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.976 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.976 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.976 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.976 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.976 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.976 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.976 [49/268] Linking static target lib/librte_meter.a 00:02:15.976 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.976 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.976 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.976 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.976 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.976 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.976 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.976 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.976 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.976 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:15.976 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:15.976 [61/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.976 [62/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.976 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.976 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.976 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.976 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.976 [67/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.976 [68/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.976 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.976 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.976 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.976 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.976 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.976 [74/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:15.976 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.976 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.976 [77/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.976 [78/268] Linking static target lib/librte_telemetry.a 00:02:15.976 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.976 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.976 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.976 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.976 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.976 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.976 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.976 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.976 [87/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.976 [88/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:15.976 [89/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.976 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.976 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.976 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.976 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.976 [94/268] Linking static target lib/librte_ring.a 00:02:15.976 [95/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.976 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.976 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.976 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.976 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.976 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.976 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.976 [102/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.976 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.976 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.976 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:15.976 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:15.976 [107/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.976 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.238 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.238 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.238 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.238 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.238 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.238 [114/268] 
Linking static target lib/librte_cmdline.a 00:02:16.238 [115/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.238 [116/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.238 [117/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.238 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.238 [119/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.238 [120/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.238 [121/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:16.238 [122/268] Linking static target lib/librte_timer.a 00:02:16.238 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.238 [124/268] Linking static target lib/librte_rcu.a 00:02:16.238 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.238 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.238 [127/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.238 [128/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.238 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.238 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.238 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.238 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:16.238 [133/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.238 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.238 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.238 [136/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.238 [137/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:16.238 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.238 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.238 [140/268] Linking static target lib/librte_net.a 00:02:16.238 [141/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.238 [142/268] Linking static target lib/librte_eal.a 00:02:16.238 [143/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.238 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:16.238 [145/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.238 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.238 [147/268] Linking static target lib/librte_mempool.a 00:02:16.238 [148/268] Linking static target lib/librte_compressdev.a 00:02:16.238 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.238 [150/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:16.238 [151/268] Linking static target lib/librte_hash.a 00:02:16.238 [152/268] Linking static target lib/librte_mbuf.a 00:02:16.238 [153/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.238 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.238 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.238 [156/268] Generating 
lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.497 [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.497 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:16.497 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:16.497 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.497 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.497 [162/268] Linking static target lib/librte_dmadev.a 00:02:16.497 [163/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.497 [164/268] Linking target lib/librte_log.so.24.1 00:02:16.497 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.497 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.497 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.497 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.497 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.497 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:16.497 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.497 [172/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.497 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:16.497 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:16.497 [175/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.497 [176/268] Linking static target lib/librte_power.a 00:02:16.497 [177/268] Linking static target lib/librte_cryptodev.a 00:02:16.497 [178/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.497 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:16.497 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:16.497 [181/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:16.497 [182/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.497 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.497 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.497 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:16.497 [186/268] Linking static target lib/librte_reorder.a 00:02:16.498 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:16.498 [188/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.498 [189/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.498 [190/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:16.498 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:16.498 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.498 [193/268] Linking static target lib/librte_security.a 00:02:16.498 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.756 [195/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.756 [196/268] Linking target lib/librte_kvargs.so.24.1 
00:02:16.756 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.756 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.756 [199/268] Linking target lib/librte_telemetry.so.24.1 00:02:16.756 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:16.756 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.756 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.756 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.756 [204/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:16.756 [205/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.756 [206/268] Linking static target drivers/librte_bus_vdev.a 00:02:16.756 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.756 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.756 [209/268] Linking static target lib/librte_ethdev.a 00:02:16.756 [210/268] Linking static target drivers/librte_mempool_ring.a 00:02:16.756 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:16.756 [212/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:16.756 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.756 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.756 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:17.015 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.015 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.015 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.015 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.274 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.274 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.274 [222/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.274 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.274 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.533 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.533 [226/268] Linking static target lib/librte_vhost.a 00:02:17.533 [227/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.533 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.790 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.724 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.658 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.769 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.143 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.143 [234/268] Linking target lib/librte_eal.so.24.1 00:02:29.143 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:29.143 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:29.143 [237/268] Linking target lib/librte_ring.so.24.1 00:02:29.143 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:29.143 [239/268] Linking target lib/librte_timer.so.24.1 00:02:29.143 [240/268] Linking target lib/librte_meter.so.24.1 00:02:29.143 [241/268] Linking target lib/librte_pci.so.24.1 00:02:29.401 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:29.401 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:29.401 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:29.401 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:29.401 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:29.401 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:29.401 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:29.401 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:29.660 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:29.660 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:29.660 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:29.660 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:29.660 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:29.919 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:29.919 [256/268] Linking target lib/librte_net.so.24.1 00:02:29.919 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:29.919 [258/268] Linking target lib/librte_compressdev.so.24.1 00:02:29.919 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:29.919 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:29.919 [261/268] Linking target lib/librte_security.so.24.1 00:02:29.919 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:29.919 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:29.919 [264/268] Linking target lib/librte_hash.so.24.1 00:02:30.178 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:30.178 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:30.178 [267/268] Linking target lib/librte_power.so.24.1 00:02:30.178 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:30.178 INFO: autodetecting backend as ninja 00:02:30.178 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:31.114 CC lib/ut_mock/mock.o 00:02:31.114 CC lib/ut/ut.o 00:02:31.114 CC lib/log/log.o 00:02:31.114 CC lib/log/log_flags.o 00:02:31.114 CC lib/log/log_deprecated.o 00:02:31.373 LIB libspdk_ut_mock.a 00:02:31.373 LIB libspdk_ut.a 00:02:31.373 LIB libspdk_log.a 00:02:31.631 CC lib/util/base64.o 00:02:31.631 CC lib/util/cpuset.o 00:02:31.631 CC lib/util/bit_array.o 00:02:31.632 CC lib/util/crc16.o 00:02:31.632 CC 
lib/util/crc32.o 00:02:31.632 CC lib/util/crc32c.o 00:02:31.632 CC lib/util/crc64.o 00:02:31.632 CC lib/util/crc32_ieee.o 00:02:31.632 CC lib/util/fd_group.o 00:02:31.632 CC lib/util/dif.o 00:02:31.632 CC lib/util/fd.o 00:02:31.632 CC lib/util/hexlify.o 00:02:31.632 CC lib/util/file.o 00:02:31.632 CC lib/util/iov.o 00:02:31.632 CC lib/util/net.o 00:02:31.632 CC lib/util/math.o 00:02:31.632 CC lib/util/string.o 00:02:31.632 CC lib/util/uuid.o 00:02:31.632 CC lib/util/pipe.o 00:02:31.632 CC lib/util/xor.o 00:02:31.632 CC lib/util/strerror_tls.o 00:02:31.632 CXX lib/trace_parser/trace.o 00:02:31.632 CC lib/util/zipf.o 00:02:31.632 CC lib/util/md5.o 00:02:31.632 CC lib/ioat/ioat.o 00:02:31.632 CC lib/dma/dma.o 00:02:31.891 CC lib/vfio_user/host/vfio_user_pci.o 00:02:31.891 CC lib/vfio_user/host/vfio_user.o 00:02:31.891 LIB libspdk_dma.a 00:02:31.891 LIB libspdk_ioat.a 00:02:31.891 LIB libspdk_vfio_user.a 00:02:31.891 LIB libspdk_util.a 00:02:32.150 LIB libspdk_trace_parser.a 00:02:32.409 CC lib/json/json_parse.o 00:02:32.409 CC lib/json/json_util.o 00:02:32.409 CC lib/json/json_write.o 00:02:32.409 CC lib/vmd/vmd.o 00:02:32.409 CC lib/vmd/led.o 00:02:32.409 CC lib/rdma_utils/rdma_utils.o 00:02:32.409 CC lib/conf/conf.o 00:02:32.409 CC lib/idxd/idxd_kernel.o 00:02:32.409 CC lib/idxd/idxd.o 00:02:32.409 CC lib/idxd/idxd_user.o 00:02:32.409 CC lib/env_dpdk/env.o 00:02:32.409 CC lib/env_dpdk/memory.o 00:02:32.409 CC lib/env_dpdk/pci.o 00:02:32.409 CC lib/env_dpdk/init.o 00:02:32.409 CC lib/env_dpdk/threads.o 00:02:32.409 CC lib/env_dpdk/pci_ioat.o 00:02:32.409 CC lib/env_dpdk/pci_virtio.o 00:02:32.409 CC lib/env_dpdk/pci_vmd.o 00:02:32.409 CC lib/env_dpdk/pci_idxd.o 00:02:32.409 CC lib/env_dpdk/pci_event.o 00:02:32.409 CC lib/env_dpdk/sigbus_handler.o 00:02:32.409 CC lib/env_dpdk/pci_dpdk.o 00:02:32.409 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:32.409 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:32.409 LIB libspdk_conf.a 00:02:32.409 LIB libspdk_json.a 00:02:32.409 LIB libspdk_rdma_utils.a 00:02:32.668 LIB libspdk_idxd.a 00:02:32.668 LIB libspdk_vmd.a 00:02:32.668 CC lib/jsonrpc/jsonrpc_server.o 00:02:32.668 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:32.668 CC lib/jsonrpc/jsonrpc_client.o 00:02:32.668 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:32.668 CC lib/rdma_provider/common.o 00:02:32.668 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:32.927 LIB libspdk_jsonrpc.a 00:02:32.927 LIB libspdk_rdma_provider.a 00:02:33.186 CC lib/rpc/rpc.o 00:02:33.186 LIB libspdk_env_dpdk.a 00:02:33.444 LIB libspdk_rpc.a 00:02:33.703 CC lib/trace/trace.o 00:02:33.703 CC lib/trace/trace_flags.o 00:02:33.703 CC lib/trace/trace_rpc.o 00:02:33.703 CC lib/notify/notify.o 00:02:33.703 CC lib/notify/notify_rpc.o 00:02:33.703 CC lib/keyring/keyring.o 00:02:33.703 CC lib/keyring/keyring_rpc.o 00:02:33.703 LIB libspdk_notify.a 00:02:33.973 LIB libspdk_trace.a 00:02:33.973 LIB libspdk_keyring.a 00:02:34.281 CC lib/thread/thread.o 00:02:34.282 CC lib/thread/iobuf.o 00:02:34.282 CC lib/sock/sock.o 00:02:34.282 CC lib/sock/sock_rpc.o 00:02:34.541 LIB libspdk_sock.a 00:02:34.800 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:34.800 CC lib/nvme/nvme_ctrlr.o 00:02:34.800 CC lib/nvme/nvme_fabric.o 00:02:34.800 CC lib/nvme/nvme_ns_cmd.o 00:02:34.800 CC lib/nvme/nvme_ns.o 00:02:34.800 CC lib/nvme/nvme_qpair.o 00:02:34.800 CC lib/nvme/nvme_pcie_common.o 00:02:34.800 CC lib/nvme/nvme_pcie.o 00:02:34.800 CC lib/nvme/nvme_transport.o 00:02:34.800 CC lib/nvme/nvme.o 00:02:34.800 CC lib/nvme/nvme_quirks.o 00:02:34.800 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:34.800 
CC lib/nvme/nvme_discovery.o 00:02:34.800 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:34.800 CC lib/nvme/nvme_tcp.o 00:02:34.800 CC lib/nvme/nvme_opal.o 00:02:34.800 CC lib/nvme/nvme_io_msg.o 00:02:34.800 CC lib/nvme/nvme_poll_group.o 00:02:34.800 CC lib/nvme/nvme_zns.o 00:02:34.800 CC lib/nvme/nvme_stubs.o 00:02:34.800 CC lib/nvme/nvme_auth.o 00:02:34.800 CC lib/nvme/nvme_cuse.o 00:02:34.800 CC lib/nvme/nvme_vfio_user.o 00:02:34.800 CC lib/nvme/nvme_rdma.o 00:02:35.058 LIB libspdk_thread.a 00:02:35.317 CC lib/accel/accel_sw.o 00:02:35.317 CC lib/accel/accel.o 00:02:35.317 CC lib/accel/accel_rpc.o 00:02:35.317 CC lib/blob/blobstore.o 00:02:35.317 CC lib/blob/request.o 00:02:35.317 CC lib/blob/zeroes.o 00:02:35.317 CC lib/blob/blob_bs_dev.o 00:02:35.317 CC lib/init/json_config.o 00:02:35.317 CC lib/init/subsystem.o 00:02:35.317 CC lib/vfu_tgt/tgt_endpoint.o 00:02:35.317 CC lib/init/rpc.o 00:02:35.317 CC lib/init/subsystem_rpc.o 00:02:35.317 CC lib/vfu_tgt/tgt_rpc.o 00:02:35.317 CC lib/fsdev/fsdev.o 00:02:35.317 CC lib/virtio/virtio.o 00:02:35.317 CC lib/fsdev/fsdev_io.o 00:02:35.317 CC lib/virtio/virtio_vhost_user.o 00:02:35.317 CC lib/virtio/virtio_vfio_user.o 00:02:35.317 CC lib/fsdev/fsdev_rpc.o 00:02:35.317 CC lib/virtio/virtio_pci.o 00:02:35.317 LIB libspdk_init.a 00:02:35.576 LIB libspdk_vfu_tgt.a 00:02:35.576 LIB libspdk_virtio.a 00:02:35.576 LIB libspdk_fsdev.a 00:02:35.576 CC lib/event/log_rpc.o 00:02:35.576 CC lib/event/scheduler_static.o 00:02:35.576 CC lib/event/app.o 00:02:35.576 CC lib/event/reactor.o 00:02:35.576 CC lib/event/app_rpc.o 00:02:35.835 LIB libspdk_event.a 00:02:35.835 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:35.835 LIB libspdk_accel.a 00:02:36.094 LIB libspdk_nvme.a 00:02:36.353 CC lib/bdev/bdev_rpc.o 00:02:36.353 CC lib/bdev/bdev.o 00:02:36.353 CC lib/bdev/bdev_zone.o 00:02:36.353 CC lib/bdev/part.o 00:02:36.353 CC lib/bdev/scsi_nvme.o 00:02:36.353 LIB libspdk_fuse_dispatcher.a 00:02:36.921 LIB libspdk_blob.a 00:02:37.180 CC lib/lvol/lvol.o 00:02:37.180 CC lib/blobfs/blobfs.o 00:02:37.180 CC lib/blobfs/tree.o 00:02:37.748 LIB libspdk_lvol.a 00:02:37.748 LIB libspdk_blobfs.a 00:02:38.005 LIB libspdk_bdev.a 00:02:38.264 CC lib/scsi/dev.o 00:02:38.264 CC lib/scsi/lun.o 00:02:38.264 CC lib/scsi/port.o 00:02:38.264 CC lib/scsi/scsi.o 00:02:38.264 CC lib/scsi/scsi_bdev.o 00:02:38.264 CC lib/scsi/task.o 00:02:38.264 CC lib/scsi/scsi_pr.o 00:02:38.264 CC lib/scsi/scsi_rpc.o 00:02:38.264 CC lib/ftl/ftl_layout.o 00:02:38.264 CC lib/ftl/ftl_core.o 00:02:38.264 CC lib/ftl/ftl_debug.o 00:02:38.264 CC lib/ftl/ftl_init.o 00:02:38.264 CC lib/ftl/ftl_io.o 00:02:38.264 CC lib/ftl/ftl_sb.o 00:02:38.264 CC lib/ftl/ftl_l2p.o 00:02:38.264 CC lib/ftl/ftl_band.o 00:02:38.264 CC lib/ftl/ftl_l2p_flat.o 00:02:38.264 CC lib/ftl/ftl_nv_cache.o 00:02:38.264 CC lib/ftl/ftl_band_ops.o 00:02:38.264 CC lib/ftl/ftl_writer.o 00:02:38.264 CC lib/ftl/ftl_rq.o 00:02:38.264 CC lib/ftl/ftl_reloc.o 00:02:38.264 CC lib/ftl/ftl_l2p_cache.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:38.264 CC lib/ftl/ftl_p2l.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt.o 00:02:38.264 CC lib/ftl/ftl_p2l_log.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:02:38.264 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:38.264 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:38.264 CC lib/nvmf/ctrlr.o 00:02:38.264 CC lib/ftl/utils/ftl_conf.o 00:02:38.264 CC lib/nvmf/ctrlr_discovery.o 00:02:38.264 CC lib/ftl/utils/ftl_md.o 00:02:38.264 CC lib/nvmf/ctrlr_bdev.o 00:02:38.264 CC lib/ftl/utils/ftl_mempool.o 00:02:38.264 CC lib/nvmf/subsystem.o 00:02:38.264 CC lib/ftl/utils/ftl_bitmap.o 00:02:38.264 CC lib/nvmf/nvmf.o 00:02:38.264 CC lib/ftl/utils/ftl_property.o 00:02:38.264 CC lib/nvmf/nvmf_rpc.o 00:02:38.264 CC lib/nvmf/transport.o 00:02:38.264 CC lib/nvmf/stubs.o 00:02:38.264 CC lib/nvmf/tcp.o 00:02:38.264 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:38.264 CC lib/nvmf/mdns_server.o 00:02:38.264 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:38.264 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:38.264 CC lib/nvmf/rdma.o 00:02:38.264 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:38.264 CC lib/nvmf/vfio_user.o 00:02:38.264 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:38.264 CC lib/nbd/nbd_rpc.o 00:02:38.264 CC lib/nbd/nbd.o 00:02:38.264 CC lib/nvmf/auth.o 00:02:38.264 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:38.264 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:38.264 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:38.264 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:38.264 CC lib/ublk/ublk.o 00:02:38.264 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:38.264 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:38.264 CC lib/ublk/ublk_rpc.o 00:02:38.264 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:38.264 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:38.264 CC lib/ftl/base/ftl_base_dev.o 00:02:38.264 CC lib/ftl/base/ftl_base_bdev.o 00:02:38.264 CC lib/ftl/ftl_trace.o 00:02:38.523 LIB libspdk_nbd.a 00:02:38.781 LIB libspdk_scsi.a 00:02:38.781 LIB libspdk_ublk.a 00:02:39.039 LIB libspdk_ftl.a 00:02:39.039 CC lib/iscsi/init_grp.o 00:02:39.039 CC lib/vhost/vhost.o 00:02:39.039 CC lib/iscsi/conn.o 00:02:39.039 CC lib/vhost/vhost_rpc.o 00:02:39.039 CC lib/iscsi/tgt_node.o 00:02:39.039 CC lib/iscsi/iscsi.o 00:02:39.039 CC lib/vhost/vhost_scsi.o 00:02:39.039 CC lib/iscsi/param.o 00:02:39.039 CC lib/vhost/vhost_blk.o 00:02:39.039 CC lib/iscsi/portal_grp.o 00:02:39.039 CC lib/vhost/rte_vhost_user.o 00:02:39.039 CC lib/iscsi/iscsi_subsystem.o 00:02:39.039 CC lib/iscsi/iscsi_rpc.o 00:02:39.039 CC lib/iscsi/task.o 00:02:39.607 LIB libspdk_nvmf.a 00:02:39.607 LIB libspdk_vhost.a 00:02:39.866 LIB libspdk_iscsi.a 00:02:40.125 CC module/env_dpdk/env_dpdk_rpc.o 00:02:40.384 CC module/vfu_device/vfu_virtio.o 00:02:40.384 CC module/vfu_device/vfu_virtio_scsi.o 00:02:40.384 CC module/vfu_device/vfu_virtio_blk.o 00:02:40.384 CC module/vfu_device/vfu_virtio_rpc.o 00:02:40.384 CC module/vfu_device/vfu_virtio_fs.o 00:02:40.384 CC module/accel/ioat/accel_ioat.o 00:02:40.384 CC module/accel/ioat/accel_ioat_rpc.o 00:02:40.384 CC module/accel/iaa/accel_iaa_rpc.o 00:02:40.384 CC module/accel/iaa/accel_iaa.o 00:02:40.384 CC module/scheduler/gscheduler/gscheduler.o 00:02:40.384 CC module/accel/dsa/accel_dsa.o 00:02:40.384 CC module/accel/dsa/accel_dsa_rpc.o 00:02:40.384 CC module/accel/error/accel_error_rpc.o 00:02:40.384 CC module/accel/error/accel_error.o 00:02:40.384 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:40.384 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:40.384 LIB libspdk_env_dpdk_rpc.a 00:02:40.384 CC module/blob/bdev/blob_bdev.o 00:02:40.384 CC module/sock/posix/posix.o 00:02:40.384 CC module/keyring/file/keyring.o 00:02:40.384 CC module/fsdev/aio/fsdev_aio.o 00:02:40.384 CC module/keyring/file/keyring_rpc.o 
00:02:40.384 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:40.384 CC module/fsdev/aio/linux_aio_mgr.o 00:02:40.384 CC module/keyring/linux/keyring.o 00:02:40.384 CC module/keyring/linux/keyring_rpc.o 00:02:40.384 LIB libspdk_scheduler_gscheduler.a 00:02:40.384 LIB libspdk_accel_ioat.a 00:02:40.384 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.384 LIB libspdk_scheduler_dynamic.a 00:02:40.384 LIB libspdk_accel_error.a 00:02:40.384 LIB libspdk_accel_iaa.a 00:02:40.384 LIB libspdk_keyring_file.a 00:02:40.384 LIB libspdk_keyring_linux.a 00:02:40.643 LIB libspdk_blob_bdev.a 00:02:40.643 LIB libspdk_accel_dsa.a 00:02:40.643 LIB libspdk_vfu_device.a 00:02:40.911 LIB libspdk_sock_posix.a 00:02:40.911 LIB libspdk_fsdev_aio.a 00:02:40.911 CC module/blobfs/bdev/blobfs_bdev.o 00:02:40.911 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.911 CC module/bdev/gpt/vbdev_gpt.o 00:02:40.911 CC module/bdev/gpt/gpt.o 00:02:40.911 CC module/bdev/nvme/bdev_nvme.o 00:02:40.911 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:40.911 CC module/bdev/nvme/nvme_rpc.o 00:02:40.911 CC module/bdev/nvme/vbdev_opal.o 00:02:40.911 CC module/bdev/nvme/bdev_mdns_client.o 00:02:40.911 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:40.911 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:40.911 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.911 CC module/bdev/raid/bdev_raid.o 00:02:40.911 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.911 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.911 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.911 CC module/bdev/ftl/bdev_ftl.o 00:02:40.911 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:40.911 CC module/bdev/passthru/vbdev_passthru.o 00:02:40.911 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.911 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.911 CC module/bdev/raid/raid1.o 00:02:40.911 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:40.911 CC module/bdev/raid/raid0.o 00:02:40.911 CC module/bdev/error/vbdev_error.o 00:02:40.911 CC module/bdev/raid/concat.o 00:02:40.911 CC module/bdev/error/vbdev_error_rpc.o 00:02:40.911 CC module/bdev/delay/vbdev_delay.o 00:02:40.911 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:40.911 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:40.911 CC module/bdev/null/bdev_null.o 00:02:40.911 CC module/bdev/malloc/bdev_malloc.o 00:02:40.911 CC module/bdev/null/bdev_null_rpc.o 00:02:40.911 CC module/bdev/split/vbdev_split.o 00:02:40.911 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.911 CC module/bdev/aio/bdev_aio.o 00:02:40.911 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.911 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.911 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.911 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.911 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:40.911 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:41.175 LIB libspdk_blobfs_bdev.a 00:02:41.175 LIB libspdk_bdev_gpt.a 00:02:41.175 LIB libspdk_bdev_split.a 00:02:41.175 LIB libspdk_bdev_error.a 00:02:41.175 LIB libspdk_bdev_null.a 00:02:41.175 LIB libspdk_bdev_ftl.a 00:02:41.175 LIB libspdk_bdev_zone_block.a 00:02:41.175 LIB libspdk_bdev_passthru.a 00:02:41.175 LIB libspdk_bdev_aio.a 00:02:41.175 LIB libspdk_bdev_delay.a 00:02:41.175 LIB libspdk_bdev_iscsi.a 00:02:41.175 LIB libspdk_bdev_malloc.a 00:02:41.434 LIB libspdk_bdev_lvol.a 00:02:41.434 LIB libspdk_bdev_virtio.a 00:02:41.693 LIB libspdk_bdev_raid.a 00:02:42.259 LIB libspdk_bdev_nvme.a 00:02:43.197 CC module/event/subsystems/keyring/keyring.o 00:02:43.197 CC module/event/subsystems/vmd/vmd.o 00:02:43.197 CC module/event/subsystems/fsdev/fsdev.o 
00:02:43.197 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:43.197 CC module/event/subsystems/iobuf/iobuf.o 00:02:43.197 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:43.197 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:43.197 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.197 CC module/event/subsystems/sock/sock.o 00:02:43.197 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:43.197 LIB libspdk_event_keyring.a 00:02:43.197 LIB libspdk_event_fsdev.a 00:02:43.197 LIB libspdk_event_vmd.a 00:02:43.197 LIB libspdk_event_vhost_blk.a 00:02:43.197 LIB libspdk_event_iobuf.a 00:02:43.197 LIB libspdk_event_sock.a 00:02:43.197 LIB libspdk_event_scheduler.a 00:02:43.197 LIB libspdk_event_vfu_tgt.a 00:02:43.455 CC module/event/subsystems/accel/accel.o 00:02:43.455 LIB libspdk_event_accel.a 00:02:43.715 CC module/event/subsystems/bdev/bdev.o 00:02:43.974 LIB libspdk_event_bdev.a 00:02:44.233 CC module/event/subsystems/scsi/scsi.o 00:02:44.233 CC module/event/subsystems/ublk/ublk.o 00:02:44.233 CC module/event/subsystems/nbd/nbd.o 00:02:44.233 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:44.233 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:44.233 LIB libspdk_event_scsi.a 00:02:44.233 LIB libspdk_event_nbd.a 00:02:44.233 LIB libspdk_event_ublk.a 00:02:44.492 LIB libspdk_event_nvmf.a 00:02:44.751 CC module/event/subsystems/iscsi/iscsi.o 00:02:44.751 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:44.751 LIB libspdk_event_vhost_scsi.a 00:02:44.751 LIB libspdk_event_iscsi.a 00:02:45.009 CC test/rpc_client/rpc_client_test.o 00:02:45.009 TEST_HEADER include/spdk/accel.h 00:02:45.009 TEST_HEADER include/spdk/barrier.h 00:02:45.009 TEST_HEADER include/spdk/accel_module.h 00:02:45.009 TEST_HEADER include/spdk/assert.h 00:02:45.009 TEST_HEADER include/spdk/bdev.h 00:02:45.009 TEST_HEADER include/spdk/bdev_module.h 00:02:45.009 TEST_HEADER include/spdk/base64.h 00:02:45.009 CC app/spdk_top/spdk_top.o 00:02:45.009 CC app/spdk_lspci/spdk_lspci.o 00:02:45.009 TEST_HEADER include/spdk/bdev_zone.h 00:02:45.009 CC app/spdk_nvme_identify/identify.o 00:02:45.009 TEST_HEADER include/spdk/bit_array.h 00:02:45.009 TEST_HEADER include/spdk/bit_pool.h 00:02:45.009 TEST_HEADER include/spdk/blob_bdev.h 00:02:45.009 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:45.009 TEST_HEADER include/spdk/blobfs.h 00:02:45.009 CXX app/trace/trace.o 00:02:45.009 TEST_HEADER include/spdk/conf.h 00:02:45.009 TEST_HEADER include/spdk/blob.h 00:02:45.009 CC app/spdk_nvme_discover/discovery_aer.o 00:02:45.009 TEST_HEADER include/spdk/config.h 00:02:45.009 TEST_HEADER include/spdk/cpuset.h 00:02:45.009 TEST_HEADER include/spdk/crc16.h 00:02:45.009 TEST_HEADER include/spdk/crc32.h 00:02:45.009 TEST_HEADER include/spdk/dif.h 00:02:45.009 TEST_HEADER include/spdk/crc64.h 00:02:45.009 TEST_HEADER include/spdk/endian.h 00:02:45.009 TEST_HEADER include/spdk/dma.h 00:02:45.009 CC app/trace_record/trace_record.o 00:02:45.009 TEST_HEADER include/spdk/env_dpdk.h 00:02:45.009 TEST_HEADER include/spdk/env.h 00:02:45.009 TEST_HEADER include/spdk/event.h 00:02:45.009 TEST_HEADER include/spdk/fd_group.h 00:02:45.009 CC app/spdk_nvme_perf/perf.o 00:02:45.009 TEST_HEADER include/spdk/fd.h 00:02:45.009 TEST_HEADER include/spdk/fsdev.h 00:02:45.009 TEST_HEADER include/spdk/file.h 00:02:45.009 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:45.009 TEST_HEADER include/spdk/ftl.h 00:02:45.009 TEST_HEADER include/spdk/fsdev_module.h 00:02:45.009 TEST_HEADER include/spdk/hexlify.h 00:02:45.009 TEST_HEADER 
include/spdk/histogram_data.h 00:02:45.009 TEST_HEADER include/spdk/gpt_spec.h 00:02:45.009 TEST_HEADER include/spdk/idxd_spec.h 00:02:45.009 TEST_HEADER include/spdk/ioat.h 00:02:45.009 TEST_HEADER include/spdk/idxd.h 00:02:45.009 TEST_HEADER include/spdk/iscsi_spec.h 00:02:45.009 TEST_HEADER include/spdk/ioat_spec.h 00:02:45.009 TEST_HEADER include/spdk/init.h 00:02:45.009 TEST_HEADER include/spdk/json.h 00:02:45.009 TEST_HEADER include/spdk/keyring_module.h 00:02:45.009 TEST_HEADER include/spdk/jsonrpc.h 00:02:45.009 TEST_HEADER include/spdk/likely.h 00:02:45.009 TEST_HEADER include/spdk/keyring.h 00:02:45.009 TEST_HEADER include/spdk/lvol.h 00:02:45.009 TEST_HEADER include/spdk/log.h 00:02:45.009 TEST_HEADER include/spdk/mmio.h 00:02:45.009 TEST_HEADER include/spdk/md5.h 00:02:45.009 TEST_HEADER include/spdk/memory.h 00:02:45.009 TEST_HEADER include/spdk/nvme.h 00:02:45.009 TEST_HEADER include/spdk/net.h 00:02:45.009 TEST_HEADER include/spdk/notify.h 00:02:45.009 TEST_HEADER include/spdk/nbd.h 00:02:45.009 TEST_HEADER include/spdk/nvme_intel.h 00:02:45.009 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:45.009 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:45.009 TEST_HEADER include/spdk/nvme_zns.h 00:02:45.009 TEST_HEADER include/spdk/nvme_spec.h 00:02:45.009 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:45.009 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:45.009 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:45.009 TEST_HEADER include/spdk/nvmf_spec.h 00:02:45.009 TEST_HEADER include/spdk/nvmf.h 00:02:45.009 CC app/nvmf_tgt/nvmf_main.o 00:02:45.009 TEST_HEADER include/spdk/nvmf_transport.h 00:02:45.009 TEST_HEADER include/spdk/opal.h 00:02:45.009 TEST_HEADER include/spdk/opal_spec.h 00:02:45.009 TEST_HEADER include/spdk/pipe.h 00:02:45.009 TEST_HEADER include/spdk/pci_ids.h 00:02:45.009 TEST_HEADER include/spdk/rpc.h 00:02:45.009 TEST_HEADER include/spdk/reduce.h 00:02:45.009 TEST_HEADER include/spdk/queue.h 00:02:45.009 TEST_HEADER include/spdk/scsi.h 00:02:45.009 TEST_HEADER include/spdk/sock.h 00:02:45.009 TEST_HEADER include/spdk/scsi_spec.h 00:02:45.009 TEST_HEADER include/spdk/scheduler.h 00:02:45.270 CC app/spdk_tgt/spdk_tgt.o 00:02:45.270 CC app/spdk_dd/spdk_dd.o 00:02:45.270 TEST_HEADER include/spdk/stdinc.h 00:02:45.270 TEST_HEADER include/spdk/string.h 00:02:45.270 TEST_HEADER include/spdk/trace.h 00:02:45.270 TEST_HEADER include/spdk/tree.h 00:02:45.270 TEST_HEADER include/spdk/thread.h 00:02:45.270 CC app/iscsi_tgt/iscsi_tgt.o 00:02:45.270 TEST_HEADER include/spdk/trace_parser.h 00:02:45.270 TEST_HEADER include/spdk/uuid.h 00:02:45.270 TEST_HEADER include/spdk/ublk.h 00:02:45.270 TEST_HEADER include/spdk/version.h 00:02:45.270 TEST_HEADER include/spdk/util.h 00:02:45.270 TEST_HEADER include/spdk/vhost.h 00:02:45.270 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:45.270 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:45.271 TEST_HEADER include/spdk/vmd.h 00:02:45.271 TEST_HEADER include/spdk/zipf.h 00:02:45.271 TEST_HEADER include/spdk/xor.h 00:02:45.271 CXX test/cpp_headers/accel_module.o 00:02:45.271 CXX test/cpp_headers/accel.o 00:02:45.271 CXX test/cpp_headers/barrier.o 00:02:45.271 CXX test/cpp_headers/assert.o 00:02:45.271 CXX test/cpp_headers/base64.o 00:02:45.271 CXX test/cpp_headers/bdev.o 00:02:45.271 CXX test/cpp_headers/bdev_zone.o 00:02:45.271 CXX test/cpp_headers/bit_array.o 00:02:45.271 CXX test/cpp_headers/bdev_module.o 00:02:45.271 CXX test/cpp_headers/bit_pool.o 00:02:45.271 CC test/env/pci/pci_ut.o 00:02:45.271 CXX test/cpp_headers/blobfs.o 00:02:45.271 CXX 
test/cpp_headers/blob_bdev.o 00:02:45.271 CXX test/cpp_headers/blobfs_bdev.o 00:02:45.271 CXX test/cpp_headers/blob.o 00:02:45.271 CXX test/cpp_headers/config.o 00:02:45.271 CXX test/cpp_headers/conf.o 00:02:45.271 CXX test/cpp_headers/cpuset.o 00:02:45.271 CXX test/cpp_headers/crc16.o 00:02:45.271 CXX test/cpp_headers/crc32.o 00:02:45.271 CXX test/cpp_headers/dif.o 00:02:45.271 CXX test/cpp_headers/crc64.o 00:02:45.271 CXX test/cpp_headers/dma.o 00:02:45.271 CC test/env/vtophys/vtophys.o 00:02:45.271 CXX test/cpp_headers/env_dpdk.o 00:02:45.271 CXX test/cpp_headers/endian.o 00:02:45.271 CXX test/cpp_headers/fd_group.o 00:02:45.271 CXX test/cpp_headers/event.o 00:02:45.271 CXX test/cpp_headers/env.o 00:02:45.271 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:45.271 CXX test/cpp_headers/fd.o 00:02:45.271 CXX test/cpp_headers/fsdev.o 00:02:45.271 CXX test/cpp_headers/ftl.o 00:02:45.271 CXX test/cpp_headers/file.o 00:02:45.271 CXX test/cpp_headers/fuse_dispatcher.o 00:02:45.271 CXX test/cpp_headers/fsdev_module.o 00:02:45.271 CXX test/cpp_headers/gpt_spec.o 00:02:45.271 CC test/env/memory/memory_ut.o 00:02:45.271 CXX test/cpp_headers/histogram_data.o 00:02:45.271 CXX test/cpp_headers/hexlify.o 00:02:45.271 CXX test/cpp_headers/idxd.o 00:02:45.271 CXX test/cpp_headers/idxd_spec.o 00:02:45.271 CXX test/cpp_headers/ioat.o 00:02:45.271 CXX test/cpp_headers/init.o 00:02:45.271 CXX test/cpp_headers/ioat_spec.o 00:02:45.271 CXX test/cpp_headers/json.o 00:02:45.271 CXX test/cpp_headers/jsonrpc.o 00:02:45.271 CXX test/cpp_headers/iscsi_spec.o 00:02:45.271 CXX test/cpp_headers/keyring.o 00:02:45.271 CXX test/cpp_headers/keyring_module.o 00:02:45.271 CXX test/cpp_headers/log.o 00:02:45.271 CXX test/cpp_headers/lvol.o 00:02:45.271 CXX test/cpp_headers/likely.o 00:02:45.271 CXX test/cpp_headers/md5.o 00:02:45.271 CXX test/cpp_headers/memory.o 00:02:45.271 CXX test/cpp_headers/mmio.o 00:02:45.271 CC test/app/jsoncat/jsoncat.o 00:02:45.271 CXX test/cpp_headers/nbd.o 00:02:45.271 CC test/thread/poller_perf/poller_perf.o 00:02:45.271 CXX test/cpp_headers/net.o 00:02:45.271 CXX test/cpp_headers/nvme_intel.o 00:02:45.271 CXX test/cpp_headers/notify.o 00:02:45.271 CC test/app/histogram_perf/histogram_perf.o 00:02:45.271 CXX test/cpp_headers/nvme.o 00:02:45.271 CXX test/cpp_headers/nvme_ocssd.o 00:02:45.271 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:45.271 CXX test/cpp_headers/nvme_spec.o 00:02:45.271 CXX test/cpp_headers/nvme_zns.o 00:02:45.271 CC test/thread/lock/spdk_lock.o 00:02:45.271 CXX test/cpp_headers/nvmf_cmd.o 00:02:45.271 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:45.271 CC app/fio/nvme/fio_plugin.o 00:02:45.271 CXX test/cpp_headers/nvmf.o 00:02:45.271 LINK spdk_lspci 00:02:45.271 CC examples/ioat/perf/perf.o 00:02:45.271 CXX test/cpp_headers/nvmf_spec.o 00:02:45.271 CXX test/cpp_headers/nvmf_transport.o 00:02:45.271 CXX test/cpp_headers/opal.o 00:02:45.271 CXX test/cpp_headers/opal_spec.o 00:02:45.271 CXX test/cpp_headers/pci_ids.o 00:02:45.271 CXX test/cpp_headers/pipe.o 00:02:45.271 CC examples/ioat/verify/verify.o 00:02:45.271 CXX test/cpp_headers/queue.o 00:02:45.271 CXX test/cpp_headers/reduce.o 00:02:45.271 CXX test/cpp_headers/rpc.o 00:02:45.271 CXX test/cpp_headers/scheduler.o 00:02:45.271 CXX test/cpp_headers/scsi.o 00:02:45.271 CXX test/cpp_headers/scsi_spec.o 00:02:45.271 CXX test/cpp_headers/sock.o 00:02:45.271 CC test/dma/test_dma/test_dma.o 00:02:45.271 CC examples/util/zipf/zipf.o 00:02:45.271 CC test/app/stub/stub.o 00:02:45.271 CXX test/cpp_headers/stdinc.o 00:02:45.271 
CC test/app/bdev_svc/bdev_svc.o 00:02:45.271 LINK rpc_client_test 00:02:45.271 CXX test/cpp_headers/string.o 00:02:45.271 CXX test/cpp_headers/thread.o 00:02:45.271 CC test/env/mem_callbacks/mem_callbacks.o 00:02:45.271 LINK spdk_nvme_discover 00:02:45.271 CC app/fio/bdev/fio_plugin.o 00:02:45.271 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:45.271 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:45.271 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:45.271 LINK spdk_trace_record 00:02:45.271 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:45.271 LINK interrupt_tgt 00:02:45.271 CXX test/cpp_headers/trace.o 00:02:45.271 LINK nvmf_tgt 00:02:45.271 LINK jsoncat 00:02:45.271 CXX test/cpp_headers/trace_parser.o 00:02:45.271 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:45.271 CXX test/cpp_headers/tree.o 00:02:45.271 LINK vtophys 00:02:45.271 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:45.271 CXX test/cpp_headers/ublk.o 00:02:45.271 CXX test/cpp_headers/util.o 00:02:45.271 CXX test/cpp_headers/uuid.o 00:02:45.530 LINK histogram_perf 00:02:45.530 LINK env_dpdk_post_init 00:02:45.530 CXX test/cpp_headers/version.o 00:02:45.530 CXX test/cpp_headers/vfio_user_pci.o 00:02:45.530 CXX test/cpp_headers/vfio_user_spec.o 00:02:45.530 CXX test/cpp_headers/vhost.o 00:02:45.530 CXX test/cpp_headers/vmd.o 00:02:45.530 CXX test/cpp_headers/xor.o 00:02:45.530 CXX test/cpp_headers/zipf.o 00:02:45.530 LINK poller_perf 00:02:45.530 LINK zipf 00:02:45.530 LINK spdk_tgt 00:02:45.530 LINK stub 00:02:45.530 LINK iscsi_tgt 00:02:45.530 LINK verify 00:02:45.530 LINK ioat_perf 00:02:45.530 LINK bdev_svc 00:02:45.530 LINK spdk_trace 00:02:45.530 LINK pci_ut 00:02:45.530 LINK nvme_fuzz 00:02:45.530 LINK llvm_vfio_fuzz 00:02:45.800 LINK test_dma 00:02:45.800 LINK spdk_dd 00:02:45.800 LINK vhost_fuzz 00:02:45.800 LINK spdk_nvme_identify 00:02:45.800 LINK spdk_bdev 00:02:45.800 LINK spdk_nvme 00:02:45.800 LINK spdk_nvme_perf 00:02:45.800 LINK mem_callbacks 00:02:45.800 LINK spdk_top 00:02:45.800 LINK llvm_nvme_fuzz 00:02:46.058 CC examples/vmd/led/led.o 00:02:46.058 CC examples/sock/hello_world/hello_sock.o 00:02:46.058 CC examples/vmd/lsvmd/lsvmd.o 00:02:46.058 CC app/vhost/vhost.o 00:02:46.058 CC examples/thread/thread/thread_ex.o 00:02:46.058 CC examples/idxd/perf/perf.o 00:02:46.058 LINK lsvmd 00:02:46.058 LINK led 00:02:46.058 LINK hello_sock 00:02:46.058 LINK vhost 00:02:46.316 LINK memory_ut 00:02:46.316 LINK thread 00:02:46.316 LINK idxd_perf 00:02:46.316 LINK spdk_lock 00:02:46.316 LINK iscsi_fuzz 00:02:46.881 CC examples/nvme/reconnect/reconnect.o 00:02:46.881 CC examples/nvme/hotplug/hotplug.o 00:02:46.881 CC examples/nvme/abort/abort.o 00:02:46.881 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:46.881 CC examples/nvme/hello_world/hello_world.o 00:02:46.881 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:46.881 CC examples/nvme/arbitration/arbitration.o 00:02:46.881 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:46.881 CC test/event/event_perf/event_perf.o 00:02:46.881 CC test/event/reactor/reactor.o 00:02:46.881 CC test/event/reactor_perf/reactor_perf.o 00:02:46.881 CC test/event/app_repeat/app_repeat.o 00:02:46.881 CC test/event/scheduler/scheduler.o 00:02:47.139 LINK cmb_copy 00:02:47.139 LINK pmr_persistence 00:02:47.139 LINK hello_world 00:02:47.139 LINK hotplug 00:02:47.139 LINK event_perf 00:02:47.139 LINK reactor 00:02:47.139 LINK reconnect 00:02:47.139 LINK reactor_perf 00:02:47.139 LINK abort 00:02:47.139 LINK app_repeat 00:02:47.139 LINK arbitration 00:02:47.139 LINK 
nvme_manage 00:02:47.139 LINK scheduler 00:02:47.139 CC test/nvme/aer/aer.o 00:02:47.139 CC test/nvme/e2edp/nvme_dp.o 00:02:47.139 CC test/nvme/boot_partition/boot_partition.o 00:02:47.398 CC test/nvme/simple_copy/simple_copy.o 00:02:47.398 CC test/nvme/sgl/sgl.o 00:02:47.398 CC test/nvme/reset/reset.o 00:02:47.398 CC test/nvme/connect_stress/connect_stress.o 00:02:47.398 CC test/nvme/overhead/overhead.o 00:02:47.398 CC test/nvme/compliance/nvme_compliance.o 00:02:47.398 CC test/nvme/reserve/reserve.o 00:02:47.398 CC test/nvme/startup/startup.o 00:02:47.398 CC test/nvme/cuse/cuse.o 00:02:47.398 CC test/accel/dif/dif.o 00:02:47.398 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:47.398 CC test/blobfs/mkfs/mkfs.o 00:02:47.398 CC test/nvme/err_injection/err_injection.o 00:02:47.398 CC test/nvme/fused_ordering/fused_ordering.o 00:02:47.398 CC test/nvme/fdp/fdp.o 00:02:47.398 CC test/lvol/esnap/esnap.o 00:02:47.398 LINK boot_partition 00:02:47.398 LINK connect_stress 00:02:47.398 LINK reserve 00:02:47.398 LINK startup 00:02:47.398 LINK err_injection 00:02:47.398 LINK doorbell_aers 00:02:47.398 LINK simple_copy 00:02:47.398 LINK nvme_dp 00:02:47.398 LINK aer 00:02:47.398 LINK mkfs 00:02:47.398 LINK sgl 00:02:47.398 LINK reset 00:02:47.398 LINK overhead 00:02:47.398 LINK fused_ordering 00:02:47.398 LINK fdp 00:02:47.656 LINK nvme_compliance 00:02:47.656 LINK dif 00:02:47.915 CC examples/accel/perf/accel_perf.o 00:02:47.915 CC examples/blob/hello_world/hello_blob.o 00:02:47.915 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:47.915 CC examples/blob/cli/blobcli.o 00:02:47.915 LINK hello_blob 00:02:48.172 LINK hello_fsdev 00:02:48.172 LINK cuse 00:02:48.172 LINK accel_perf 00:02:48.172 LINK blobcli 00:02:48.881 CC examples/bdev/hello_world/hello_bdev.o 00:02:48.881 CC examples/bdev/bdevperf/bdevperf.o 00:02:49.139 LINK hello_bdev 00:02:49.398 CC test/bdev/bdevio/bdevio.o 00:02:49.398 LINK bdevperf 00:02:49.657 LINK bdevio 00:02:50.593 LINK esnap 00:02:50.853 CC examples/nvmf/nvmf/nvmf.o 00:02:51.112 LINK nvmf 00:02:52.491 00:02:52.491 real 0m45.761s 00:02:52.491 user 6m17.392s 00:02:52.491 sys 2m30.667s 00:02:52.491 06:58:56 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:52.491 06:58:56 make -- common/autotest_common.sh@10 -- $ set +x 00:02:52.491 ************************************ 00:02:52.491 END TEST make 00:02:52.491 ************************************ 00:02:52.491 06:58:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:52.491 06:58:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:52.491 06:58:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:52.491 06:58:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.491 06:58:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:52.491 06:58:56 -- pm/common@44 -- $ pid=3648654 00:02:52.491 06:58:56 -- pm/common@50 -- $ kill -TERM 3648654 00:02:52.491 06:58:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.491 06:58:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:52.491 06:58:56 -- pm/common@44 -- $ pid=3648655 00:02:52.491 06:58:56 -- pm/common@50 -- $ kill -TERM 3648655 00:02:52.491 06:58:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.491 06:58:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
00:02:52.491 06:58:56 -- pm/common@44 -- $ pid=3648657 00:02:52.491 06:58:56 -- pm/common@50 -- $ kill -TERM 3648657 00:02:52.491 06:58:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.491 06:58:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:52.491 06:58:56 -- pm/common@44 -- $ pid=3648680 00:02:52.491 06:58:56 -- pm/common@50 -- $ sudo -E kill -TERM 3648680 00:02:52.491 06:58:56 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:52.491 06:58:56 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:52.491 06:58:56 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:52.491 06:58:56 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:52.491 06:58:56 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:52.491 06:58:56 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:52.491 06:58:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:52.491 06:58:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:52.491 06:58:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:52.491 06:58:56 -- scripts/common.sh@336 -- # IFS=.-: 00:02:52.491 06:58:56 -- scripts/common.sh@336 -- # read -ra ver1 00:02:52.491 06:58:56 -- scripts/common.sh@337 -- # IFS=.-: 00:02:52.491 06:58:56 -- scripts/common.sh@337 -- # read -ra ver2 00:02:52.491 06:58:56 -- scripts/common.sh@338 -- # local 'op=<' 00:02:52.491 06:58:56 -- scripts/common.sh@340 -- # ver1_l=2 00:02:52.491 06:58:56 -- scripts/common.sh@341 -- # ver2_l=1 00:02:52.491 06:58:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:52.491 06:58:56 -- scripts/common.sh@344 -- # case "$op" in 00:02:52.491 06:58:56 -- scripts/common.sh@345 -- # : 1 00:02:52.491 06:58:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:52.491 06:58:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:52.491 06:58:56 -- scripts/common.sh@365 -- # decimal 1 00:02:52.491 06:58:56 -- scripts/common.sh@353 -- # local d=1 00:02:52.491 06:58:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:52.491 06:58:56 -- scripts/common.sh@355 -- # echo 1 00:02:52.491 06:58:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:52.491 06:58:56 -- scripts/common.sh@366 -- # decimal 2 00:02:52.491 06:58:56 -- scripts/common.sh@353 -- # local d=2 00:02:52.491 06:58:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:52.491 06:58:56 -- scripts/common.sh@355 -- # echo 2 00:02:52.491 06:58:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:52.491 06:58:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:52.491 06:58:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:52.491 06:58:56 -- scripts/common.sh@368 -- # return 0 00:02:52.491 06:58:56 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:52.491 06:58:56 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:52.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.491 --rc genhtml_branch_coverage=1 00:02:52.491 --rc genhtml_function_coverage=1 00:02:52.491 --rc genhtml_legend=1 00:02:52.491 --rc geninfo_all_blocks=1 00:02:52.491 --rc geninfo_unexecuted_blocks=1 00:02:52.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:52.491 ' 00:02:52.491 06:58:56 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:52.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.491 --rc genhtml_branch_coverage=1 00:02:52.491 --rc genhtml_function_coverage=1 00:02:52.491 --rc genhtml_legend=1 00:02:52.491 --rc geninfo_all_blocks=1 00:02:52.491 --rc geninfo_unexecuted_blocks=1 00:02:52.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:52.491 ' 00:02:52.491 06:58:56 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:52.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.491 --rc genhtml_branch_coverage=1 00:02:52.491 --rc genhtml_function_coverage=1 00:02:52.491 --rc genhtml_legend=1 00:02:52.491 --rc geninfo_all_blocks=1 00:02:52.491 --rc geninfo_unexecuted_blocks=1 00:02:52.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:52.491 ' 00:02:52.491 06:58:56 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:52.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.491 --rc genhtml_branch_coverage=1 00:02:52.491 --rc genhtml_function_coverage=1 00:02:52.491 --rc genhtml_legend=1 00:02:52.491 --rc geninfo_all_blocks=1 00:02:52.491 --rc geninfo_unexecuted_blocks=1 00:02:52.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:52.491 ' 00:02:52.491 06:58:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:52.491 06:58:56 -- nvmf/common.sh@7 -- # uname -s 00:02:52.491 06:58:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.491 06:58:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.491 06:58:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.491 06:58:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.491 06:58:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:52.491 06:58:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.491 06:58:56 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.491 06:58:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.491 06:58:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.491 06:58:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:52.491 06:58:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:52.491 06:58:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:52.491 06:58:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:52.491 06:58:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:52.491 06:58:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:52.491 06:58:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:52.491 06:58:56 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:52.491 06:58:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:52.491 06:58:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:52.491 06:58:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.491 06:58:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.492 06:58:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.492 06:58:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.492 06:58:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.492 06:58:57 -- paths/export.sh@5 -- # export PATH 00:02:52.492 06:58:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.492 06:58:57 -- nvmf/common.sh@51 -- # : 0 00:02:52.492 06:58:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:52.492 06:58:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:52.492 06:58:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:52.492 06:58:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:52.492 06:58:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:52.492 06:58:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:52.492 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:52.492 06:58:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:52.492 06:58:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:52.492 06:58:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:52.492 06:58:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:52.492 06:58:57 -- spdk/autotest.sh@32 -- # uname -s 00:02:52.492 
06:58:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:52.492 06:58:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:52.492 06:58:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:52.492 06:58:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:52.492 06:58:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:52.492 06:58:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:52.492 06:58:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:52.492 06:58:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:52.492 06:58:57 -- spdk/autotest.sh@48 -- # udevadm_pid=3711972 00:02:52.492 06:58:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:52.492 06:58:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:52.492 06:58:57 -- pm/common@17 -- # local monitor 00:02:52.492 06:58:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.492 06:58:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.492 06:58:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.492 06:58:57 -- pm/common@21 -- # date +%s 00:02:52.492 06:58:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.492 06:58:57 -- pm/common@21 -- # date +%s 00:02:52.492 06:58:57 -- pm/common@25 -- # sleep 1 00:02:52.492 06:58:57 -- pm/common@21 -- # date +%s 00:02:52.492 06:58:57 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082337 00:02:52.492 06:58:57 -- pm/common@21 -- # date +%s 00:02:52.492 06:58:57 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082337 00:02:52.492 06:58:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082337 00:02:52.492 06:58:57 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082337 00:02:52.751 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082337_collect-cpu-load.pm.log 00:02:52.751 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082337_collect-vmstat.pm.log 00:02:52.751 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082337_collect-cpu-temp.pm.log 00:02:52.751 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082337_collect-bmc-pm.bmc.pm.log 00:02:53.689 06:58:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.689 06:58:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:53.689 06:58:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:53.689 06:58:58 -- common/autotest_common.sh@10 -- # set +x 
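The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the variable under test expands to an empty string, and test(1) rejects a non-integer operand. The run continues because the failing test appears to guard only an optional branch. A minimal sketch of a guarded form of that check, with FLAG as a hypothetical stand-in for whatever variable the script reads on line 33:

    # FLAG is a hypothetical stand-in for the value tested on line 33;
    # defaulting an empty expansion to 0 keeps the numeric test valid
    FLAG=""
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi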
00:02:53.689 06:58:58 -- spdk/autotest.sh@59 -- # create_test_list 00:02:53.689 06:58:58 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:53.689 06:58:58 -- common/autotest_common.sh@10 -- # set +x 00:02:53.689 06:58:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:53.689 06:58:58 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:53.689 06:58:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:53.689 06:58:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:53.689 06:58:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:53.689 06:58:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:53.689 06:58:58 -- common/autotest_common.sh@1455 -- # uname 00:02:53.689 06:58:58 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:53.689 06:58:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:53.689 06:58:58 -- common/autotest_common.sh@1475 -- # uname 00:02:53.689 06:58:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:53.689 06:58:58 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:53.689 06:58:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:53.689 lcov: LCOV version 1.15 00:02:53.689 06:58:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:03:01.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:07.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:09.632 06:59:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:09.632 06:59:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:09.632 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:03:09.632 06:59:13 -- spdk/autotest.sh@78 -- # rm -f 00:03:09.632 06:59:13 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.922 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:12.922 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:12.922 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:13.181 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:13.181 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:13.181 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:13.181 06:59:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:13.181 06:59:17 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:13.181 06:59:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:13.181 06:59:17 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:13.181 06:59:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:13.181 06:59:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:13.181 06:59:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:13.181 06:59:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.181 06:59:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:13.181 06:59:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:13.181 06:59:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:13.181 06:59:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:13.181 06:59:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:13.181 06:59:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:13.181 06:59:17 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:13.181 No valid GPT data, bailing 00:03:13.181 06:59:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:13.181 06:59:17 -- scripts/common.sh@394 -- # pt= 00:03:13.181 06:59:17 -- scripts/common.sh@395 -- # return 1 00:03:13.181 06:59:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:13.181 1+0 records in 00:03:13.181 1+0 records out 00:03:13.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470225 s, 223 MB/s 00:03:13.181 06:59:17 -- spdk/autotest.sh@105 -- # sync 00:03:13.181 06:59:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:13.181 06:59:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:13.181 06:59:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:21.394 06:59:24 -- spdk/autotest.sh@111 -- # uname -s 00:03:21.394 06:59:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:21.394 06:59:24 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:21.394 06:59:24 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:21.394 06:59:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:21.394 06:59:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:21.394 06:59:24 -- common/autotest_common.sh@10 -- # set +x 00:03:21.394 ************************************ 00:03:21.394 START TEST setup.sh 00:03:21.394 ************************************ 00:03:21.394 06:59:25 setup.sh -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:21.394 * Looking for test storage... 
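get_zoned_devs above walks /sys/block/nvme* and flags a device as zoned when its queue/zoned attribute reads anything other than "none"; with no zoned devices found, spdk-gpt.py then probed /dev/nvme0n1, bailed on "No valid GPT data", and the first MiB was zeroed with dd before sync. A minimal sketch of that zoned-device scan, assuming the sysfs layout the trace shows:

    # flag a block device as zoned when queue/zoned is not "none"
    for dev in /sys/block/nvme*; do
        [ -e "$dev/queue/zoned" ] || continue
        [ "$(cat "$dev/queue/zoned")" != "none" ] && echo "zoned: ${dev##*/}"
    done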
00:03:21.394 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:21.394 06:59:25 setup.sh -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:21.394 06:59:25 setup.sh -- common/autotest_common.sh@1691 -- # lcov --version 00:03:21.394 06:59:25 setup.sh -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:21.394 06:59:25 setup.sh -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.394 06:59:25 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:21.394 06:59:25 setup.sh -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.394 06:59:25 setup.sh -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.394 --rc genhtml_branch_coverage=1 00:03:21.394 --rc genhtml_function_coverage=1 00:03:21.394 --rc genhtml_legend=1 00:03:21.394 --rc geninfo_all_blocks=1 00:03:21.394 --rc geninfo_unexecuted_blocks=1 00:03:21.394 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:21.394 ' 00:03:21.394 06:59:25 setup.sh -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.394 --rc genhtml_branch_coverage=1 00:03:21.394 --rc genhtml_function_coverage=1 00:03:21.394 --rc genhtml_legend=1 00:03:21.394 --rc geninfo_all_blocks=1 00:03:21.394 --rc geninfo_unexecuted_blocks=1 
00:03:21.394 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:21.394 ' 00:03:21.395 06:59:25 setup.sh -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.395 --rc genhtml_branch_coverage=1 00:03:21.395 --rc genhtml_function_coverage=1 00:03:21.395 --rc genhtml_legend=1 00:03:21.395 --rc geninfo_all_blocks=1 00:03:21.395 --rc geninfo_unexecuted_blocks=1 00:03:21.395 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:21.395 ' 00:03:21.395 06:59:25 setup.sh -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.395 --rc genhtml_branch_coverage=1 00:03:21.395 --rc genhtml_function_coverage=1 00:03:21.395 --rc genhtml_legend=1 00:03:21.395 --rc geninfo_all_blocks=1 00:03:21.395 --rc geninfo_unexecuted_blocks=1 00:03:21.395 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:21.395 ' 00:03:21.395 06:59:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:21.395 06:59:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:21.395 06:59:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:21.395 06:59:25 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:21.395 06:59:25 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:21.395 06:59:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:21.395 ************************************ 00:03:21.395 START TEST acl 00:03:21.395 ************************************ 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:21.395 * Looking for test storage... 
00:03:21.395 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1691 -- # lcov --version 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.395 06:59:25 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.395 --rc genhtml_branch_coverage=1 00:03:21.395 --rc genhtml_function_coverage=1 00:03:21.395 --rc genhtml_legend=1 00:03:21.395 --rc geninfo_all_blocks=1 00:03:21.395 --rc geninfo_unexecuted_blocks=1 00:03:21.395 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:21.395 ' 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.395 --rc genhtml_branch_coverage=1 00:03:21.395 --rc 
genhtml_function_coverage=1 00:03:21.395 --rc genhtml_legend=1 00:03:21.395 --rc geninfo_all_blocks=1 00:03:21.395 --rc geninfo_unexecuted_blocks=1 00:03:21.395 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:21.395 ' 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.395 --rc genhtml_branch_coverage=1 00:03:21.395 --rc genhtml_function_coverage=1 00:03:21.395 --rc genhtml_legend=1 00:03:21.395 --rc geninfo_all_blocks=1 00:03:21.395 --rc geninfo_unexecuted_blocks=1 00:03:21.395 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:21.395 ' 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:21.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.395 --rc genhtml_branch_coverage=1 00:03:21.395 --rc genhtml_function_coverage=1 00:03:21.395 --rc genhtml_legend=1 00:03:21.395 --rc geninfo_all_blocks=1 00:03:21.395 --rc geninfo_unexecuted_blocks=1 00:03:21.395 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:21.395 ' 00:03:21.395 06:59:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.395 06:59:25 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:21.395 06:59:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:21.395 06:59:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:21.395 06:59:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:21.395 06:59:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:21.395 06:59:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:21.395 06:59:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.395 06:59:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.591 06:59:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:25.591 06:59:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:25.591 06:59:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:25.591 06:59:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.591 06:59:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.591 06:59:29 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:28.128 Hugepages 00:03:28.128 node hugesize free / total 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 00:03:28.128 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.128 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.388 06:59:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:28.388 06:59:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:28.388 06:59:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:28.388 06:59:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:28.388 06:59:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:28.388 06:59:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.388 06:59:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:28.388 06:59:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:28.388 06:59:32 setup.sh.acl -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:28.388 06:59:32 setup.sh.acl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:28.388 06:59:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:28.388 ************************************ 00:03:28.388 START TEST denied 00:03:28.388 ************************************ 00:03:28.388 06:59:32 setup.sh.acl.denied -- 
common/autotest_common.sh@1127 -- # denied 00:03:28.388 06:59:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:28.388 06:59:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:28.388 06:59:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:28.388 06:59:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.388 06:59:32 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:31.685 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:31.685 06:59:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:31.685 06:59:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:31.686 06:59:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.686 06:59:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:31.686 06:59:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:31.686 06:59:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.686 06:59:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.686 06:59:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:31.686 06:59:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.686 06:59:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.961 00:03:36.961 real 0m8.009s 00:03:36.961 user 0m2.580s 00:03:36.961 sys 0m4.819s 00:03:36.961 06:59:40 setup.sh.acl.denied -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:36.961 06:59:40 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:36.961 ************************************ 00:03:36.961 END TEST denied 00:03:36.961 ************************************ 00:03:36.961 06:59:40 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:36.961 06:59:40 setup.sh.acl -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:36.961 06:59:40 setup.sh.acl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:36.961 06:59:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:36.961 ************************************ 00:03:36.961 START TEST allowed 00:03:36.961 ************************************ 00:03:36.961 06:59:40 setup.sh.acl.allowed -- common/autotest_common.sh@1127 -- # allowed 00:03:36.961 06:59:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:36.961 06:59:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:36.961 06:59:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.961 06:59:40 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:36.961 06:59:40 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:41.159 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:41.159 06:59:45 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:41.159 06:59:45 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:41.159 06:59:45 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:41.159 06:59:45 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.159 06:59:45 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.360 00:03:45.360 real 0m8.286s 00:03:45.360 user 0m2.249s 00:03:45.360 sys 0m4.667s 00:03:45.360 06:59:49 setup.sh.acl.allowed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:45.360 06:59:49 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:45.360 ************************************ 00:03:45.360 END TEST allowed 00:03:45.360 ************************************ 00:03:45.360 00:03:45.360 real 0m23.898s 00:03:45.360 user 0m7.534s 00:03:45.360 sys 0m14.662s 00:03:45.360 06:59:49 setup.sh.acl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:45.360 06:59:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:45.360 ************************************ 00:03:45.360 END TEST acl 00:03:45.360 ************************************ 00:03:45.360 06:59:49 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.360 06:59:49 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:45.360 06:59:49 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:45.360 06:59:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.360 ************************************ 00:03:45.360 START TEST hugepages 00:03:45.360 ************************************ 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.360 * Looking for test storage... 00:03:45.360 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lcov --version 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.360 06:59:49 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:45.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.360 --rc genhtml_branch_coverage=1 00:03:45.360 --rc genhtml_function_coverage=1 00:03:45.360 --rc genhtml_legend=1 00:03:45.360 --rc geninfo_all_blocks=1 00:03:45.360 --rc geninfo_unexecuted_blocks=1 00:03:45.360 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:45.360 ' 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:45.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.360 --rc genhtml_branch_coverage=1 00:03:45.360 --rc genhtml_function_coverage=1 00:03:45.360 --rc genhtml_legend=1 00:03:45.360 --rc geninfo_all_blocks=1 00:03:45.360 --rc geninfo_unexecuted_blocks=1 00:03:45.360 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:45.360 ' 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:45.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.360 --rc genhtml_branch_coverage=1 00:03:45.360 --rc genhtml_function_coverage=1 00:03:45.360 --rc genhtml_legend=1 00:03:45.360 --rc geninfo_all_blocks=1 00:03:45.360 --rc geninfo_unexecuted_blocks=1 00:03:45.360 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:45.360 ' 00:03:45.360 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:45.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.360 --rc genhtml_branch_coverage=1 00:03:45.360 --rc genhtml_function_coverage=1 00:03:45.360 --rc genhtml_legend=1 00:03:45.360 --rc geninfo_all_blocks=1 00:03:45.360 --rc geninfo_unexecuted_blocks=1 00:03:45.360 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:45.360 ' 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.360 06:59:49 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 39662296 kB' 'MemAvailable: 43961340 kB' 'Buffers: 10016 kB' 'Cached: 11970396 kB' 'SwapCached: 0 kB' 'Active: 8448008 kB' 'Inactive: 4026424 kB' 'Active(anon): 8027112 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497460 kB' 'Mapped: 181520 kB' 'Shmem: 7533092 kB' 'KReclaimable: 499372 kB' 'Slab: 1292596 kB' 'SReclaimable: 499372 kB' 'SUnreclaim: 793224 kB' 'KernelStack: 21856 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433348 kB' 'Committed_AS: 9246996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214224 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _
00:03:45.361 06:59:49 setup.sh.hugepages -- setup/common.sh@31-32 -- # [xtrace elided: the read loop walks the remaining /proc/meminfo keys (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp), hitting "continue" on every key that is not Hugepagesize]
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
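The loop above is setup/common.sh's get_meminfo scanning /proc/meminfo one key at a time under xtrace, which is why every non-matching key produces a [[ ... ]] / continue pair. A minimal sketch of the same parsing pattern (our own function name, not the verbatim SPDK source):

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "Hugepagesize:    2048 kB" -> var=Hugepagesize val=2048 _=kB
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }

    get_meminfo_sketch Hugepagesize   # prints 2048 on this builder, per the trace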
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@38-40 -- # [xtrace elided: clear_hp loops over both nodes and each "/sys/devices/system/node/node$node/hugepages/hugepages-"* size, echoing 0 into every nr_hugepages]
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:03:45.362 06:59:49 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:03:45.362 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:45.362 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:45.362 06:59:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.362 ************************************
00:03:45.362 START TEST single_node_setup
00:03:45.362 ************************************
00:03:45.362 06:59:49 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1127 -- # single_node_setup
00:03:45.362 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
00:03:45.362 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152
00:03:45.362 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:03:45.362 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift
00:03:45.362 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0')
00:03:45.362 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids
00:03:45.362 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.363 06:59:49 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:48.657 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:48.657 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:50.040 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
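Before the test body runs, clear_hp zeroes every hugepage pool on both NUMA nodes, then single_node_setup requests 1024 2048-kB pages on node 0 only (NRHUGE=1024 HUGENODE=0) and hands off to scripts/setup.sh, which also rebinds the ioatdma/nvme devices above. A hedged sketch of the sysfs writes this amounts to (standard kernel paths; the function name is ours, and root is required):

    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"    # drain this size's pool on this node
            done
        done
    }

    # allocate the test's pool on node 0 only, roughly what setup.sh does for HUGENODE=0:
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages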
verify_nr_hugepages
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:50.040 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41822076 kB' 'MemAvailable: 46121080 kB' 'Buffers: 10016 kB' 'Cached: 11970556 kB' 'SwapCached: 0 kB' 'Active: 8451300 kB' 'Inactive: 4026424 kB' 'Active(anon): 8030404 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500464 kB' 'Mapped: 181684 kB' 'Shmem: 7533252 kB' 'KReclaimable: 499332 kB' 'Slab: 1290840 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791508 kB' 'KernelStack: 21984 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9255508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214288 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
00:03:50.041 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # [xtrace elided: scan of all meminfo keys from MemTotal through HardwareCorrupted, "continue" on each, until AnonHugePages matches]
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
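get_meminfo is node-aware: the [[ -e /sys/devices/system/node/node/meminfo ]] test above fails because no node argument was passed (node=), so the function falls back to /proc/meminfo; the mem=("${mem[@]#Node +([0-9]) }") step strips the "Node N " prefix that per-node meminfo files carry. A rough stand-alone equivalent of that source selection (assumed helper, not the verbatim common.sh):

    meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        # with a node argument, prefer that node's meminfo file
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node files prefix every line with "Node 0 ", so strip it
        sed 's/^Node [0-9]* //' "$mem_f"
    }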
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.042 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.043 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41821796 kB' 'MemAvailable: 46120800 kB' 'Buffers: 10016 kB' 'Cached: 11970560 kB' 'SwapCached: 0 kB' 'Active: 8451264 kB' 'Inactive: 4026424 kB' 'Active(anon): 8030368 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500396 kB' 'Mapped: 181612 kB' 'Shmem: 7533256 kB' 'KReclaimable: 499332 kB' 'Slab: 1290804 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791472 kB' 'KernelStack: 22144 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9255368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
00:03:50.043 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # [xtrace elided: scan of all meminfo keys from MemTotal through HugePages_Rsvd, "continue" on each, until HugePages_Surp matches]
00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
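The values verify_nr_hugepages extracts here (anon=0, surp=0, and the HugePages_Rsvd lookup that follows) are all visible in the meminfo snapshots above. A quick way to eyeball the same counters outside the harness:

    awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/' /proc/meminfo
    # on this builder: AnonHugePages 0 kB, HugePages_Total/Free 1024, Rsvd/Surp 0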
get=HugePages_Rsvd 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41821972 kB' 'MemAvailable: 46120976 kB' 'Buffers: 10016 kB' 'Cached: 11970576 kB' 'SwapCached: 0 kB' 'Active: 8451284 kB' 'Inactive: 4026424 kB' 'Active(anon): 8030388 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500336 kB' 'Mapped: 181612 kB' 'Shmem: 7533272 kB' 'KReclaimable: 499332 kB' 'Slab: 1290804 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791472 kB' 'KernelStack: 21952 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9255548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:50.045 06:59:54 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.045 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
[... identical read/compare/continue xtrace records for the remaining fields, Buffers through HugePages_Free, elided ...]
00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0
00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
surplus_hugepages=0
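The xtrace above replays SPDK's get_meminfo helper one /proc/meminfo field at a time. For reference, here is a compact bash reconstruction of the lookup those setup/common.sh records perform; it is a sketch inferred from the trace (the mem_f selection at @22-23, the "Node N " prefix strip at @29, and the IFS=': ' scan at @31-33), not the verbatim SPDK source, and the loop body is simplified.

    # Sketch inferred from the xtrace above; not the verbatim SPDK source.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, read that node's own meminfo file; with
        # node empty, .../node/node/meminfo does not exist (as the @23
        # test shows) and the system-wide /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan "Field: value ..." pairs until the requested field matches.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Rsvd it prints 0 in this run, so the hugepages.sh bookkeeping records surp=0 and resv=0 against the 1024 allocated pages, matching the nr_hugepages/resv_hugepages/surplus_hugepages lines just above.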
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:50.311 anon_hugepages=0 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41821528 kB' 'MemAvailable: 46120532 kB' 'Buffers: 10016 kB' 'Cached: 11970596 kB' 'SwapCached: 0 kB' 'Active: 8451236 kB' 'Inactive: 4026424 kB' 'Active(anon): 8030340 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500776 kB' 'Mapped: 181612 kB' 'Shmem: 7533292 kB' 'KReclaimable: 499332 kB' 'Slab: 1290804 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791472 kB' 'KernelStack: 22096 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9255572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214464 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.311 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
[... identical read/compare/continue xtrace records for the remaining fields, MemAvailable through Unaccepted, elided ...]
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
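The get_nodes call and the loop that follows are the per-node half of the same check. A sketch of what those hugepages.sh records do is below; the sysfs file read into nodes_sys is an assumption (the xtrace only shows the already-expanded assignments, 1024 and 0), nodes_test is populated by code outside this excerpt, and resv comes from the lookup above.

    # Sketch inferred from the xtrace; the nodes_sys source file is an
    # assumption, not the verbatim SPDK source.
    shopt -s extglob
    resv=0   # from the hugepages.sh@99 lookup above
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # nodes_sys[N] = hugepages currently allocated on node N;
            # this run records nodes_sys[0]=1024 and nodes_sys[1]=0.
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # no_nodes=2 on this machine
        (( no_nodes > 0 ))
    }
    # Each expected per-node count is then adjusted by the reserved
    # pages and the node re-queried for its surplus (@114-116):
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        get_meminfo HugePages_Surp "$node"
    done

The node0 query that starts below is exactly this get_meminfo HugePages_Surp 0 call, now reading /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.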
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.313 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 25775048 kB' 'MemUsed: 6859388 kB' 'SwapCached: 0 kB' 'Active: 2708048 kB' 'Inactive: 308676 kB' 'Active(anon): 2582268 kB' 'Inactive(anon): 0 kB' 'Active(file): 125780 kB' 'Inactive(file): 308676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844296 kB' 'Mapped: 82484 kB' 'AnonPages: 175616 kB' 'Shmem: 2409840 kB' 'KernelStack: 11608 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 342428 kB' 'Slab: 713644 kB' 'SReclaimable: 342428 kB' 'SUnreclaim: 371216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.314 06:59:54 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _
[... identical read/compare/continue xtrace records for the node0 fields, MemUsed through HugePages_Free, elided ...]
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:50.315 node0=1024 expecting 1024
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:50.315 
00:03:50.315 real    0m5.151s
00:03:50.315 user    0m1.429s
00:03:50.315 sys     0m2.317s
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:50.315 06:59:54 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:03:50.315 ************************************
00:03:50.316 END TEST single_node_setup
00:03:50.316 ************************************
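The field scan condensed above (and repeated below for each query) is the setup suite's get_meminfo helper, the one the trace attributes to setup/common.sh: it snapshots the relevant meminfo file and walks it field by field until the requested counter turns up. A sketch reconstructed from the traced commands, paraphrased rather than copied, so treat the structure as approximate:

```bash
#!/usr/bin/env bash
# Sketch of get_meminfo, rebuilt from the xtrace above (a paraphrase,
# not claimed to be SPDK's verbatim setup/common.sh).
shopt -s extglob # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # With a node argument, read the NUMA-local copy instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; strip it so both
    # formats parse identically.
    mem=("${mem[@]#Node +([0-9]) }")

    # The loop the trace repeats for every field: split "Field: value ..."
    # on ': ' and stop at the first field equal to the request.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val" && return 0
        continue
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}
```

Called without a node it reads /proc/meminfo; called as `get_meminfo HugePages_Surp 0`, as in the single_node_setup pass above, it reads node0's copy, which is why per-node fields such as MemUsed and FilePages show up among the scanned keys.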
00:03:50.316 06:59:54 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:03:50.316 06:59:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:50.316 06:59:54 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:50.316 06:59:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:50.316 ************************************
00:03:50.316 START TEST even_2G_alloc
00:03:50.316 ************************************
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1127 -- # even_2G_alloc
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.316 06:59:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:53.612 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:53.612 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
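The even_2G_alloc prologue above computes the split before touching the kernel: get_test_nr_hugepages turns the 2097152 kB (2 GiB) request into 1024 default 2048 kB pages, and get_test_nr_hugepages_per_node hands 512 to each of the two NUMA nodes, which is what the two `nodes_test[_no_nodes - 1]=512` assignments are. A minimal paraphrase of that arithmetic, with names mirroring the trace but the loop body simplified:

```bash
#!/usr/bin/env bash
# Paraphrase of the traced size math (simplified, not verbatim SPDK):
# 2097152 kB requested / 2048 kB per hugepage = 1024 pages, then an even
# split across _no_nodes=2 NUMA nodes -> 512 pages per node.
size=2097152           # requested pool, in kB (2 GiB)
default_hugepages=2048 # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$((size / default_hugepages)) # 1024

_nr_hugepages=$nr_hugepages
_no_nodes=2
nodes_test=()
while ((_no_nodes > 0)); do
    # give the highest-numbered remaining node its share, then shrink both
    # the remaining page count and the remaining node count
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
    ((_nr_hugepages -= nodes_test[_no_nodes - 1]))
    ((_no_nodes--))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}" # node0=512 node1=512
```

setup.sh is then re-run with NRHUGE=1024 to actually reserve the pages; its "Already using the vfio-pci driver" lines are the device-binding pass reporting that every PCI function it manages is still bound from the earlier test, so only the hugepage step does new work here.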
00:03:53.612 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:03:53.612 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.613 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41824596 kB' 'MemAvailable: 46123600 kB' 'Buffers: 10016 kB' 'Cached: 11970880 kB' 'SwapCached: 0 kB' 'Active: 8451012 kB' 'Inactive: 4026424 kB' 'Active(anon): 8030116 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499868 kB' 'Mapped: 180688 kB' 'Shmem: 7533576 kB' 'KReclaimable: 499332 kB' 'Slab: 1290412 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791080 kB' 'KernelStack: 21840 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9238764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214384 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
[... repetitive xtrace condensed: every snapshot field from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped via continue ...]
00:03:53.614 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.614 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.614 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:53.614 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:53.614 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41825276 kB' 'MemAvailable: 46124280 kB' 'Buffers: 10016 kB' 'Cached: 11970884 kB' 'SwapCached: 0 kB' 'Active: 8450836 kB' 'Inactive: 4026424 kB' 'Active(anon): 8029940 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499612 kB' 'Mapped: 180556 kB' 'Shmem: 7533580 kB' 'KReclaimable: 499332 kB' 'Slab: 1290420 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791088 kB' 'KernelStack: 21824 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9238784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
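get_meminfo prints the snapshot it parsed right before scanning it, which is why the full meminfo block appears once per query. Two details of the traced prologue are worth unpacking. The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test is the verifier reading /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed entry marks the active mode (madvise here), and only sampling AnonHugePages when THP is not hard-disabled. And the `mem=("${mem[@]#Node +([0-9]) }")` expansion strips the "Node <n> " prefix that per-node meminfo files carry, so one parser serves both /proc/meminfo and the per-node copies. A standalone demo of just that strip, with sample lines abbreviated from the snapshots above:

```bash
#!/usr/bin/env bash
# Demo of the "Node <n> " prefix strip seen in the trace.
shopt -s extglob # +([0-9]) is an extended glob: one or more digits

mem=(
    'Node 0 MemTotal: 60283796 kB'
    'Node 0 HugePages_Total: 1024'
    'MemFree: 41825276 kB' # /proc/meminfo style line: no prefix to strip
)
# ${mem[@]#pattern} removes the shortest matching prefix from each element;
# lines without the prefix pass through untouched.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# MemTotal: 60283796 kB
# HugePages_Total: 1024
# MemFree: 41825276 kB
```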
setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.615 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.616 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.882 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41824044 kB' 'MemAvailable: 46123048 kB' 'Buffers: 10016 kB' 'Cached: 11970884 kB' 'SwapCached: 0 kB' 'Active: 8450796 kB' 'Inactive: 4026424 kB' 'Active(anon): 8029900 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499612 kB' 'Mapped: 180556 kB' 'Shmem: 7533580 kB' 'KReclaimable: 499332 kB' 'Slab: 1290420 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791088 kB' 'KernelStack: 21824 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9238804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.883 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41824044 kB' 'MemAvailable: 46123048 kB' 'Buffers: 10016 kB' 'Cached: 11970884 kB' 'SwapCached: 0 kB' 'Active: 8450796 kB' 'Inactive: 4026424 kB' 'Active(anon): 8029900 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499612 kB' 'Mapped: 180556 kB' 'Shmem: 7533580 kB' 'KReclaimable: 499332 kB' 'Slab: 1290420 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791088 kB' 'KernelStack: 21824 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9238804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
[xtrace elided: the same per-key scan repeats; every /proc/meminfo field that is not HugePages_Rsvd hits "continue"]
00:03:53.884 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.884 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.884 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:53.885 nr_hugepages=1024
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:53.885 resv_hugepages=0
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:53.885 surplus_hugepages=0
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:53.885 anon_hugepages=0
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
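The checks at hugepages.sh@106 and @108 encode the pass criterion for even_2G_alloc: the 1024 pages requested (2 GB of 2048 kB pages) must match what the kernel now reports, with reserved and surplus pages folded in. A hedged recap, reusing the get_meminfo sketch above; the literal 1024 is the requested count for this test:

    # Assumption: 1024 is the page count even_2G_alloc asked for.
    nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 in the trace
    surp=$(get_meminfo HugePages_Surp)            # 0
    resv=$(get_meminfo HugePages_Rsvd)            # 0
    (( 1024 == nr_hugepages + surp + resv ))      # must hold, and does here
    (( 1024 == nr_hugepages ))                    # and with no slack at all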
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.885 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41822904 kB' 'MemAvailable: 46121908 kB' 'Buffers: 10016 kB' 'Cached: 11970920 kB' 'SwapCached: 0 kB' 'Active: 8451096 kB' 'Inactive: 4026424 kB' 'Active(anon): 8030200 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499940 kB' 'Mapped: 180556 kB' 'Shmem: 7533616 kB' 'KReclaimable: 499332 kB' 'Slab: 1290424 kB' 'SReclaimable: 499332 kB' 'SUnreclaim: 791092 kB' 'KernelStack: 21824 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9238824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
[xtrace elided: per-key scan again; every field that is not HugePages_Total hits "continue"]
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
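get_nodes, traced above, discovers the NUMA topology from sysfs and records the expected per-node share: 512 pages each on this two-node machine, an even split of the 1024-page total. A sketch of that step; nodes_sys and no_nodes are the names visible in the trace, the rest is illustrative:

    shopt -s extglob                    # needed for the +([0-9]) glob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # even split: 1024 pages / 2 nodes
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))                  # bail out if discovery found nothing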
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:53.886 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.887 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26817552 kB' 'MemUsed: 5816884 kB' 'SwapCached: 0 kB' 'Active: 2707236 kB' 'Inactive: 308676 kB' 'Active(anon): 2581456 kB' 'Inactive(anon): 0 kB' 'Active(file): 125780 kB' 'Inactive(file): 308676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844476 kB' 'Mapped: 81688 kB' 'AnonPages: 174684 kB' 'Shmem: 2410020 kB' 'KernelStack: 11320 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 342428 kB' 'Slab: 713284 kB' 'SReclaimable: 342428 kB' 'SUnreclaim: 370856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: per-key scan of the node0 meminfo fields; every field that is not HugePages_Surp hits "continue"]
00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 15005808 kB' 'MemUsed: 12643552 kB' 'SwapCached: 0 kB' 'Active: 5743516 kB' 'Inactive: 3717748 kB' 'Active(anon): 5448400 kB' 'Inactive(anon): 0 kB' 'Active(file): 295116 kB' 'Inactive(file): 3717748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9136500 kB' 'Mapped: 98872 kB' 'AnonPages: 324888 kB' 'Shmem: 5123636 kB' 'KernelStack: 10488 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 156904 kB' 'Slab: 577124 kB' 'SReclaimable: 156904 kB' 'SUnreclaim: 420220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.888 06:59:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.888 [... the per-field scan repeats for every node1 meminfo field, SwapCached through HugePages_Free ...] 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.889 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.889 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.889
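Both lookups above return 0 because node0 and node1 each report 'HugePages_Surp: 0'. The pattern the trace walks through is a straight field scan: pick /proc/meminfo or the per-node sysfs copy, strip the "Node N " prefix, then read var/val pairs with IFS=': ' until the requested field matches. A minimal bash sketch of that pattern (the helper name is illustrative, not the actual setup/common.sh code, which uses mapfile plus an extglob expansion for the prefix strip):

# get_meminfo_sketch <field> [node]  ->  prints the field's value
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # Per-node lookups read the sysfs copy instead, as at common.sh@23-24.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # sed strips the "Node N " prefix that sysfs prepends to each field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 1   -> 0 on this box, per the node1 snapshot above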
06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:53.888 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:53.890 node0=512 expecting 512 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:53.890 node1=512 expecting 512 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:03:53.890 00:03:53.890 real 0m3.535s 00:03:53.890 user 0m1.295s 00:03:53.890 sys 0m2.304s 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:53.890 06:59:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:53.890 ************************************ 00:03:53.890 END TEST even_2G_alloc 00:03:53.890 ************************************ 00:03:53.890 06:59:58 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:03:53.890 06:59:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:53.890 06:59:58 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:53.890 06:59:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.890 ************************************ 00:03:53.890 START TEST odd_alloc 00:03:53.890 ************************************ 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1127 -- # odd_alloc 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:53.890
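get_test_nr_hugepages has just turned size=2098176 (kB) into nr_hugepages=1025: against the 2048 kB Hugepagesize reported in the snapshots, 2098176 / 2048 = 1024.5, and the test ends up requesting an odd 1025 pages (consistent with HUGEMEM=2049 MB set at @150 below). The traced value is consistent with a ceiling division; the precise expression in hugepages.sh@48-56 may differ, so this is just the arithmetic:

# Illustrative only: 2098176 kB of 2048 kB hugepages -> 1025 pages.
size_kb=2098176          # argument to get_test_nr_hugepages above
hugepagesize_kb=2048     # Hugepagesize from the meminfo snapshots
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "$nr_hugepages"     # -> 1025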
06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.890 06:59:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:57.196 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.196 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.196 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:03:57.196 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:03:57.196 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:57.197 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:57.197 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:57.197 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:57.197 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:57.197 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.197 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
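The @80-@83 iterations above hand out the odd page total from the highest node index down: node1 takes 1025/2 = 512 and the 513-page remainder lands on node0, matching the nodes_test assignments traced at hugepages.sh@81. A small sketch of that split (illustrative function name, not the real get_test_nr_hugepages_per_node):

split_hugepages_sketch() {
    local total=$1 nodes=$2 i
    local -a per_node
    for ((i = nodes - 1; i >= 0; i--)); do
        per_node[i]=$((total / (i + 1)))  # this node's share of what is left
        total=$((total - per_node[i]))    # remainder carries to lower nodes
    done
    echo "${per_node[@]}"
}
split_hugepages_sketch 1025 2   # -> 513 512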
07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41787336 kB' 'MemAvailable: 46086324 kB' 'Buffers: 10016 kB' 'Cached: 11971048 kB' 'SwapCached: 0 kB' 'Active: 8454840 kB' 'Inactive: 4026424 kB' 'Active(anon): 8033944 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503088 kB' 'Mapped: 181176 kB' 'Shmem: 7533744 kB' 'KReclaimable: 499316 kB' 'Slab: 1290440 kB' 'SReclaimable: 499316 kB' 'SUnreclaim: 791124 kB' 'KernelStack: 21952 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 9247836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214272 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:03:57.197 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 07:00:01 [... the IFS=': '/read/[[ field == AnonHugePages ]]/continue cycle repeats for each field from MemFree through Percpu ...] 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 07:00:01
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41782844 kB' 'MemAvailable: 46081832 kB' 'Buffers: 10016 kB' 'Cached: 11971052 kB' 'SwapCached: 0 kB' 'Active: 8456840 kB' 'Inactive: 4026424 kB' 'Active(anon): 8035944 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505616 kB' 'Mapped: 181072 kB' 'Shmem: 7533748 kB' 'KReclaimable: 499316 kB' 'Slab: 1290440 kB' 'SReclaimable: 499316 kB' 'SUnreclaim: 791124 kB' 'KernelStack: 21936 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 9250108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214228 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.198 
07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 [... the per-field HugePages_Surp scan repeats from MemFree onward; the log breaks off mid-scan after the VmallocTotal check ...]
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 
07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41783076 kB' 'MemAvailable: 46082064 kB' 'Buffers: 10016 kB' 'Cached: 11971068 kB' 'SwapCached: 0 kB' 'Active: 8453508 kB' 'Inactive: 4026424 kB' 'Active(anon): 8032612 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502284 kB' 'Mapped: 181072 kB' 'Shmem: 7533764 kB' 'KReclaimable: 499316 kB' 'Slab: 1290504 kB' 'SReclaimable: 499316 kB' 'SUnreclaim: 791188 kB' 'KernelStack: 21936 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 9246552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214224 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
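The loop traced above is setup/common.sh's get_meminfo doing a linear key lookup: it snapshots the chosen meminfo file into an array, strips the "Node <n> " prefix that per-node files carry, then walks every "Key: value" pair until the requested key matches and echoes its value. A minimal standalone sketch of that pattern, assuming bash 4+ and the Linux sysfs layout (condensed; the real helper carries extra bookkeeping, so treat this as an approximation, not the verbatim function):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo lookup pattern exercised in the trace above.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2 mem_f line var val
        local -a mem
        mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the
        # prefix so both file layouts split identically below.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Rsvd     # system-wide lookup, e.g. prints "0"
    get_meminfo HugePages_Surp 0   # node0-scoped lookup, e.g. prints "0"

Run against the snapshot printed above, the first call would yield 0 (HugePages_Rsvd) and the second would read node0's surplus from its per-node file; the long runs of "continue" in the trace are simply this loop skipping every non-matching key.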
00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.200 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:03:57.202 nr_hugepages=1025 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:57.202 resv_hugepages=0 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:57.202 surplus_hugepages=0 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:57.202 anon_hugepages=0 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41783820 kB' 'MemAvailable: 46082808 kB' 'Buffers: 10016 kB' 'Cached: 11971088 kB' 'SwapCached: 0 kB' 'Active: 8457008 kB' 'Inactive: 4026424 kB' 'Active(anon): 8036112 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505728 kB' 'Mapped: 181424 kB' 'Shmem: 7533784 kB' 'KReclaimable: 499316 kB' 'Slab: 1290504 kB' 'SReclaimable: 499316 kB' 'SUnreclaim: 791188 kB' 'KernelStack: 21936 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 
9250152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214228 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 
07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:57.204 07:00:01 
00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.204 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26791700 kB' 'MemUsed: 5842736 kB' 'SwapCached: 0 kB' 'Active: 2708180 kB' 'Inactive: 308676 kB' 'Active(anon): 2582400 kB' 'Inactive(anon): 0 kB' 'Active(file): 125780 kB' 'Inactive(file): 308676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844476 kB' 'Mapped: 82200 kB' 'AnonPages: 175556 kB' 'Shmem: 2410020 kB' 'KernelStack: 11368 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 342428 kB' 'Slab: 713340 kB' 'SReclaimable: 342428 kB' 'SUnreclaim: 370912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace trimmed: one IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue record per node0 field above, none matching until the last]
00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
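[editor's note] The get_meminfo calls traced above reduce to a small parser. A reconstruction from the trace alone (the canonical version lives in setup/common.sh and may differ in detail): pick /proc/meminfo or the per-node file, strip the "Node N " prefix that per-node files put on every line, then scan field by field until the requested key matches.

    shopt -s extglob   # the +([0-9]) pattern below needs extglob

    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines read "Node 0 MemTotal: ..."
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # get_meminfo HugePages_Surp 0   -> prints 0 against the node0 dump above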
00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 14996392 kB' 'MemUsed: 12652968 kB' 'SwapCached: 0 kB' 'Active: 5747180 kB' 'Inactive: 3717748 kB' 'Active(anon): 5452064 kB' 'Inactive(anon): 0 kB' 'Active(file): 295116 kB' 'Inactive(file): 3717748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9136668 kB' 'Mapped: 98872 kB' 'AnonPages: 328456 kB' 'Shmem: 5123804 kB' 'KernelStack: 10568 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 156888 kB' 'Slab: 577172 kB' 'SReclaimable: 156888 kB' 'SUnreclaim: 420284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.205 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace trimmed: per-field read/compare/continue records for the node1 dump, none matching until HugePages_Surp]
00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:57.207 node0=513 expecting 513 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:57.207 node1=512 expecting 512 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:57.207 00:03:57.207 real 0m3.176s 00:03:57.207 user 0m1.112s 00:03:57.207 sys 0m2.068s 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:57.207 07:00:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.207 ************************************ 00:03:57.207 END TEST odd_alloc 00:03:57.207 ************************************ 00:03:57.207 07:00:01 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:03:57.207 07:00:01 setup.sh.hugepages -- common/autotest_common.sh@1103 --
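[editor's note] The sorted_t/sorted_s records above implement the pass/fail comparison with array keys: each expected per-node count indexes sorted_t and each kernel-reported count indexes sorted_s, and since "${!arr[*]}" lists indices in ascending order, the [[ 512 513 == \5\1\2\ \5\1\3 ]] check is simply comparing the two sorted value sets (duplicate counts collapse, which is harmless here). A self-contained sketch of the trick:

    declare -a sorted_t sorted_s
    nodes_test=(513 512)   # what the test expects per node
    nodes_sys=(513 512)    # what /sys reported per node
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # value used as the array index
        sorted_s[nodes_sys[node]]=1
    done
    # key lists come back sorted, so equal strings mean equal value sets
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node hugepage counts match'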
# '[' 2 -le 1 ']' 00:03:57.207 07:00:01 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.207 07:00:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.207 ************************************ 00:03:57.207 START TEST custom_alloc 00:03:57.207 ************************************ 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1127 -- # custom_alloc 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.207 07:00:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:00.511 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.511 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.511 
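[editor's note] Two points from the custom_alloc setup just traced: the test builds nodes_hp[0]=512 and nodes_hp[1]=1024 and joins them with IFS=, into the HUGENODE string consumed by scripts/setup.sh, so the dump that follows reports HugePages_Total: 1536 (512 + 1024); and the "Already using the vfio-pci driver" lines mean setup.sh left the PCI bindings untouched. A simplified reconstruction of the HUGENODE construction (shape inferred from the trace, not copied from setup/hugepages.sh):

    join_by_comma() { local IFS=,; echo "$*"; }

    nodes_hp=([0]=512 [1]=1024)   # per-node hugepage requests
    HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    echo "HUGENODE=$(join_by_comma "${HUGENODE[@]}")"   # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$_nr_hugepages"                  # 1536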
07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.511 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 40727932 kB' 'MemAvailable: 45026920 kB' 'Buffers: 10016 kB' 'Cached: 11971224 kB' 'SwapCached: 0 kB' 'Active: 8458412 kB' 'Inactive: 4026424 kB' 'Active(anon): 8037516 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506436 kB' 'Mapped: 181632 kB' 'Shmem: 7533920 kB' 'KReclaimable: 499316 kB' 'Slab: 1291196 kB' 'SReclaimable: 499316 kB' 'SUnreclaim: 791880 kB' 'KernelStack: 21904 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 9249312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214452 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
[xtrace trimmed: per-field read/compare/continue records scanning the dump above for AnonHugePages]
00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 40731344 kB' 'MemAvailable: 45030332 kB' 'Buffers: 10016 kB' 'Cached: 11971228 kB' 'SwapCached: 0 kB' 'Active: 8457768 kB' 'Inactive: 4026424 kB' 'Active(anon): 8036872 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506324 kB' 'Mapped: 181500 kB' 'Shmem: 7533924 kB' 'KReclaimable: 499316 kB' 'Slab: 1291156 kB' 'SReclaimable: 499316 kB' 'SUnreclaim: 791840 kB' 'KernelStack: 21904 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 9249328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214436 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
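The records above are bash xtrace output from get_meminfo in setup/common.sh: the function snapshots the meminfo file with mapfile, then walks it with an IFS=': ' read loop until the requested field name matches, echoes the value, and returns. A minimal, self-contained sketch of that technique follows; the loop mirrors the trace, while the standalone wrapper around it is ours:

  #!/usr/bin/env bash
  # Sketch of the lookup traced at setup/common.sh@17-33: scan a meminfo
  # snapshot line by line and print the value of one requested field.
  shopt -s extglob            # needed for the +([0-9]) prefix-strip pattern

  get_meminfo() {
      local get=$1
      local node=${2:-}
      local var val _
      local mem_f=/proc/meminfo mem
      # Use the per-node file when a NUMA node was requested and exists.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip that.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          # The traced script compares unquoted ([[ $var == $get ]]);
          # quoting here forces a literal match, which is what is intended.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total    # prints e.g. 1536 on this runner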
00:04:00.513 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop skipped every field from MemTotal through HugePages_Rsvd with "continue"]
00:04:00.515 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.515 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
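The heavily escaped right-hand sides in these comparisons, e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p, are not in the script source; they are how bash xtrace re-quotes the expanded pattern word of a [[ == ]] test so the printed trace is unambiguous. A small repro, assuming bash with tracing on (variable names illustrative):

  # Reproduce the rendering seen in the records above.
  set -x
  var=MemTotal
  get=HugePages_Surp
  [[ $var == $get ]] || echo "no match"
  # xtrace prints the pattern side with each character escaped:
  #   + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]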
00:04:00.515 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.515 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:00.515 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:00.515 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # [trace condensed: get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ']
00:04:00.515 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' [output condensed: same /proc/meminfo snapshot as above except 'MemFree: 40732548 kB' 'MemAvailable: 45031536 kB' 'Cached: 11971248 kB' 'Active: 8457744 kB' 'Active(anon): 8036848 kB' 'AnonPages: 506316 kB' 'Shmem: 7533944 kB' 'Committed_AS: 9249352 kB']
00:04:00.515 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop skipped every field from MemTotal through HugePages_Free with "continue"]
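Each fetch also re-tests /sys/devices/system/node/node/meminfo; node is empty for these calls, so the test fails and the global /proc/meminfo is used. When a NUMA node is requested, the per-node file prefixes every line with "Node N ", which the extglob expansion in the trace strips before parsing. An illustration with made-up per-node values:

  # Per-node meminfo lines look like "Node 0 HugePages_Total: 768"
  # (numbers here are invented). The trace's expansion strips the prefix
  # so the same "field: value" parser works for both file formats.
  shopt -s extglob
  mem=('Node 0 HugePages_Total: 768' 'Node 0 HugePages_Free: 768')
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"
  # -> HugePages_Total: 768
  # -> HugePages_Free: 768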
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:04:00.517 nr_hugepages=1536
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:00.517 resv_hugepages=0
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:00.517 surplus_hugepages=0
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:00.517 anon_hugepages=0
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:00.517 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # [trace condensed: get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ']
00:04:00.518 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' [output condensed: same snapshot again except 'MemFree: 40732848 kB' 'MemAvailable: 45031836 kB' 'Cached: 11971264 kB' 'Active: 8457464 kB' 'Active(anon): 8036568 kB' 'AnonPages: 505976 kB' 'Shmem: 7533960 kB' 'Committed_AS: 9249372 kB']
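With anon=0, surp=0, and resv=0 collected, hugepages.sh@106-109 cross-checks the requested allocation against the kernel's accounting: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages, and the snapshot lets the pool's footprint be verified by arithmetic. A sketch of both checks; verify_hugepages is an illustrative name, the numbers are the ones traced above:

  # Sketch of the consistency check traced at setup/hugepages.sh@106-109.
  verify_hugepages() {
      local nr_hugepages=1536 surp=0 resv=0 anon=0
      local total=1536 free=1536 size_kb=2048 hugetlb_kb=3145728

      (( total == nr_hugepages + surp + resv )) || return 1  # books balance
      (( total == nr_hugepages ))               || return 1  # no surplus/reserved
      # Hugetlb is the pool footprint: 1536 pages * 2048 kB = 3145728 kB (3 GiB).
      (( hugetlb_kb == total * size_kb ))       || return 1
      echo "hugepage pool consistent: ${total} x ${size_kb} kB = $(( total * size_kb )) kB"
  }
  verify_hugepages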
00:04:00.518 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop was still skipping non-matching fields, MemTotal through NFS_Unstable, when this excerpt of the log cuts off mid-record]
# read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.519 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26782020 kB' 'MemUsed: 5852416 kB' 'SwapCached: 0 kB' 'Active: 2715300 kB' 'Inactive: 308676 kB' 'Active(anon): 2589520 kB' 'Inactive(anon): 0 kB' 'Active(file): 125780 kB' 'Inactive(file): 
00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26782020 kB' 'MemUsed: 5852416 kB' 'SwapCached: 0 kB' 'Active: 2715300 kB' 'Inactive: 308676 kB' 'Active(anon): 2589520 kB' 'Inactive(anon): 0 kB' 'Active(file): 125780 kB' 'Inactive(file): 308676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844480 kB' 'Mapped: 82476 kB' 'AnonPages: 182724 kB' 'Shmem: 2410024 kB' 'KernelStack: 11368 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 342428 kB' 'Slab: 713608 kB' 'SReclaimable: 342428 kB' 'SUnreclaim: 371180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.520 07:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.520-00:04:00.521 [xtrace elided: the same setup/common.sh@31-32 loop walks the node0 meminfo keys (MemTotal through HugePages_Free) and continues past every key that is not HugePages_Surp]
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.521 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:00.522 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:00.522 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.522 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.522 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 13950924 kB' 'MemUsed: 13698436 kB' 'SwapCached: 0 kB' 'Active: 5742176 kB' 'Inactive: 3717748 kB' 'Active(anon): 5447060 kB' 'Inactive(anon): 0 kB' 'Active(file): 295116 kB' 'Inactive(file): 3717748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9136844 kB' 'Mapped: 99024 kB' 'AnonPages: 323164 kB' 'Shmem: 5123980 kB' 'KernelStack: 10520 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 156888 kB' 'Slab: 577548 kB' 'SReclaimable: 156888 kB' 'SUnreclaim: 420660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:00.522 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.522 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.522-00:04:00.523 [xtrace elided: the same setup/common.sh@31-32 loop walks the node1 meminfo keys (MemTotal through HugePages_Free) and continues past every key that is not HugePages_Surp]
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:04:00.523 node0=512 expecting 512
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:04:00.523 node1=1024 expecting 1024
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:00.523 
00:04:00.523 real	0m3.374s
00:04:00.523 user	0m1.192s
00:04:00.523 sys	0m2.178s
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:00.523 07:00:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:00.523 ************************************
00:04:00.523 END TEST custom_alloc
00:04:00.523 ************************************
00:04:00.788 07:00:05 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:00.788 07:00:05 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:00.788 07:00:05 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:00.788 07:00:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
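The custom_alloc trace above is dominated by get_meminfo: it picks one key out of /proc/meminfo or, when a node argument is given, out of the per-node copy under /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that common.sh strips before splitting each line on ': '. A minimal standalone sketch of that lookup pattern, with illustrative names rather than SPDK's exact code:

  #!/usr/bin/env bash
  # get_meminfo_sketch KEY [NODE] -- echo the value of KEY, mirroring the
  # IFS=': ' + read loop visible in the xtrace above.
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # Per-node meminfo lives in sysfs; every line is prefixed with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Strip the node prefix, then split on ": " as common.sh@31 does.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp 0  -> prints the 0 echoed at common.sh@33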
00:04:00.788 ************************************
00:04:00.788 START TEST no_shrink_alloc
00:04:00.788 ************************************
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1127 -- # no_shrink_alloc
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.788 07:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
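As the @188 lines above show, no_shrink_alloc asks scripts/setup.sh for 1024 hugepages pinned to NUMA node 0 by exporting NRHUGE=1024 and HUGENODE=0 before the call. On a stock Linux kernel, a per-node reservation of 2 MiB pages ultimately comes down to a sysfs write; the sketch below shows only that core step (setup.sh itself does more, e.g. mounting hugetlbfs and binding devices):

  # Reserve 1024 2 MiB hugepages on NUMA node 0 (core step only).
  echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  # Read it back to confirm the kernel honored the request:
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages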
00:04:04.222 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:04.222 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:04.222 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:04.222 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:04.222 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:04.222 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:04.222 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:04.222 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:04.222 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:04.223 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:04.223 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:04.223 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:04.223 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:04.223 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:04.223 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:04.223 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:04.223 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41769816 kB' 'MemAvailable: 46068804 kB' 'Buffers: 10016 kB' 'Cached: 11971384 kB' 'SwapCached: 0 kB' 'Active: 8455384 kB' 'Inactive: 4026424 kB' 'Active(anon): 8034488 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503404 kB' 'Mapped: 180680 kB' 'Shmem: 7534080 kB' 'KReclaimable: 499316 kB' 'Slab: 1289888 kB' 'SReclaimable: 499316 kB' 'SUnreclaim: 790572 kB' 'KernelStack: 21888 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9248608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
00:04:04.223 [xtrace elided: the setup/common.sh@31-32 loop begins walking the /proc/meminfo keys (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active/Inactive and their (anon)/(file) variants, Unevictable, Mlocked, SwapTotal, SwapFree, ...) looking for AnonHugePages; the scan continues in the log below]
00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.223 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # 
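The lines above are the body of get_meminfo from setup/common.sh, as captured by the xtrace. A minimal sketch of that helper, reconstructed from the trace (the while-loop framing, the process-substitution feed, and the final return 1 are assumptions; the individual statements appear verbatim in the trace, and the extglob pattern at common.sh@29 requires shopt -s extglob):

  shopt -s extglob                         # needed for +([0-9]) below

  get_meminfo() {                          # e.g. get_meminfo AnonHugePages
      local get=$1 node=$2
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # Prefer per-NUMA-node counters when a node number was passed
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <n> "; strip it
      mem=("${mem[@]#Node +([0-9]) }")

      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val" && return 0          # value only, the "kB" unit is dropped
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

With IFS=': ', a line such as "AnonHugePages: 0 kB" splits into var=AnonHugePages, val=0, and the unit lands in the throwaway _, which is why the trace echoes a bare 0.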
00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.224 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41770604 kB' 'MemAvailable: 46069592 kB' [...] 'KernelStack: 22112 kB' 'PageTables: 8340 kB' [...] 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' (remaining fields unchanged from the snapshot above)
(common.sh@31-32: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and continues)
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
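Each counter check in this test is one call to that helper. A hypothetical standalone use, assuming the optional second argument selects the per-node file probed at common.sh@23:

  # Whole-system counters (on this run both scans returned 0)
  anon=$(get_meminfo AnonHugePages)
  surp=$(get_meminfo HugePages_Surp)

  # Hypothetical per-node query: reads
  # /sys/devices/system/node/node0/meminfo instead of /proc/meminfo
  node0_free=$(get_meminfo HugePages_Free 0)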
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.226 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41769976 kB' 'MemAvailable: 46068964 kB' [...] 'KernelStack: 22144 kB' 'PageTables: 8240 kB' [...] 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' (remaining fields unchanged)
(common.sh@31-32: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and continues)
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:04.228 nr_hugepages=1024
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:04.228 resv_hugepages=0
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:04.228 surplus_hugepages=0
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:04.228 anon_hugepages=0
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.228 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41771960 kB' 'MemAvailable: 46070948 kB' 'Buffers: 10016 kB' 'Cached: 11971432 kB' 'SwapCached: 0 kB' 'Active: 8456788 kB' 'Inactive: 4026424 kB' 'Active(anon): 8035892 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505224 kB' 'Mapped: 180596 kB' 'Shmem: 7534128 kB' 'KReclaimable: 499316 kB' 'Slab: 1289888 kB' 'SReclaimable: 499316 kB' 'SUnreclaim: 790572 kB' 'KernelStack: 22304 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9248804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214640 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.229 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
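The records above show common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or, when a node argument is given and its sysfs file exists, that node's own meminfo), strips any "Node N " prefix, then walks the "key: value" lines with IFS=': ' read until the requested field matches and its value is echoed. The following is a minimal bash sketch reconstructed from the common.sh@16-@33 trace records, not the verbatim SPDK script:

  #!/usr/bin/env bash
  # Reconstruction of the lookup traced above; the flow follows the
  # common.sh@16-@33 records, not the exact SPDK source.
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem
      # @23-@24: per-node lookups read that node's meminfo file instead
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"             # @28: snapshot the file
      mem=("${mem[@]#Node +([0-9]) }")      # @29: drop the "Node N " prefixes
      while IFS=': ' read -r var val _; do  # @31: split each "key: value" line
          [[ $var == "$get" ]] || continue  # @32: skip fields until a match
          echo "$val"                       # @33: emit the value and stop
          return 0
      done < <(printf '%s\n' "${mem[@]}")   # @16: replay the snapshot
      return 1
  }

  get_meminfo HugePages_Total   # prints 1024 on this runner
  get_meminfo HugePages_Surp 0  # per-node variant, as called later in the trace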
[... set -x scan trace elided: setup/common.sh@32 repeats "[[ <field> == HugePages_Total ]] / continue" for Buffers through Unaccepted ...]
00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.230 07:00:08
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 25747092 kB' 'MemUsed: 6887344 kB' 'SwapCached: 0 kB' 'Active: 2713412 kB' 'Inactive: 308676 kB' 'Active(anon): 2587632 kB' 'Inactive(anon): 0 kB' 'Active(file): 125780 kB' 'Inactive(file): 308676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844484 kB' 'Mapped: 81716 kB' 'AnonPages: 180904 kB' 'Shmem: 2410028 kB' 'KernelStack: 11480 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 342428 kB' 'Slab: 712780 kB' 'SReclaimable: 342428 kB' 'SUnreclaim: 370352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.230 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.231 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
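The snapshot just above came from node0's own meminfo file. Shortly before it, get_nodes (hugepages.sh@26-@32) recorded a per-node hugepage count, nodes_sys[0]=1024 and nodes_sys[1]=0, and set no_nodes=2. The trace does not show where those counts were read from, so the sysfs counter in this sketch is an assumption:

  # Hedged reconstruction of get_nodes (hugepages.sh@26-@32). The
  # nr_hugepages sysfs path is an assumption; the trace only shows the
  # resulting values (node0=1024, node1=0) and no_nodes=2.
  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || exit 1   # at least one NUMA node must exist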
[... set -x scan trace elided: setup/common.sh@32 repeats "[[ <field> == HugePages_Surp ]] / continue" for Inactive through ShmemHugePages ...]
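The per-node lookup completes just below with HugePages_Surp=0, after which hugepages.sh@114-@129 folds surplus and reserved pages into the expected count, records the distinct per-node totals in sorted_t/sorted_s, and prints the node0=1024 expecting 1024 check. A condensed sketch of that accounting, seeded with the values visible in this log:

  # Sketch of the per-node verification (hugepages.sh@114-@129), seeded
  # with this log's values: node0 carries all 1024 pages, surp=resv=0.
  declare -A nodes_test=([0]=1024) nodes_sys=([0]=1024 [1]=0)
  declare -A sorted_t sorted_s
  surp=0 resv=0
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += surp + resv ))   # @115-@116: fold in surplus/reserved
      sorted_t[${nodes_test[node]}]=1         # @126: distinct expected counts
      sorted_s[${nodes_sys[node]}]=1          # @126: distinct system counts
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1   # @129
  done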
[... set -x scan trace elided: ShmemPmdMapped through HugePages_Free ...]
00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:04.232 node0=1024 expecting 1024 07:00:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.232 07:00:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:07.526 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.526 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.527 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.527 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.527 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.527 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.527 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.527 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.527 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.527 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41808132 kB' 'MemAvailable: 46107080 kB' 'Buffers: 10016 kB' 'Cached: 11971548 kB' 'SwapCached: 0 kB' 'Active: 8453520 kB' 'Inactive: 4026424 kB' 'Active(anon): 8032624 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501612 kB' 'Mapped: 180652 kB' 'Shmem: 7534244 kB' 'KReclaimable: 499276 kB' 'Slab: 1290488 kB' 'SReclaimable: 499276 kB' 'SUnreclaim: 791212 kB' 'KernelStack: 21984 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9242748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214656 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.527 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 07:00:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... set -x scan trace elided: setup/common.sh@32 repeats "[[ <field> == AnonHugePages ]] / continue" for Cached through AnonPages ...]
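This AnonHugePages lookup exists because of the gate at hugepages.sh@95, visible earlier in the trace: the runner's transparent_hugepage mode reads "always [madvise] never", which does not contain the literal "[never]", so THP is enabled and THP-backed anonymous memory is counted. A sketch of that gate, reusing the get_meminfo reconstruction above:

  # Sketch of the THP gate (hugepages.sh@95-@96). Reads the kernel's THP
  # mode string; anything other than "[never]" means THP-backed anonymous
  # pages may exist. get_meminfo is the reconstruction shown earlier.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)  # kB of anonymous memory in hugepages
  fi
  echo "anon_hugepages=$anon"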
00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.528 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41816224 kB' 'MemAvailable: 46115172 kB' 'Buffers: 10016 kB' 'Cached: 11971552 kB' 'SwapCached: 0 kB' 'Active: 8452748 kB' 'Inactive: 4026424 kB' 'Active(anon): 8031852 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500764 kB' 'Mapped: 180600 kB' 'Shmem: 7534248 kB' 'KReclaimable: 499276 kB' 'Slab: 1290548 kB' 'SReclaimable: 499276 kB' 'SUnreclaim: 791272 kB' 'KernelStack: 21696 kB' 'PageTables: 7204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9241532 kB' 'VmallocTotal: 34359738367 kB' 
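The xtrace above is the body of get_meminfo() from the test harness's setup/common.sh. Reconstructed from the @17-@33 trace lines, the helper looks roughly like the sketch below; this is an inference from the log, not the verbatim source, and the optional node argument, the if/else shape, and the extglob detail are assumptions read off the @18/@23/@29 lines.

get_meminfo() {
    local get=$1
    local node=${2:-}    # optional NUMA node index; empty in every call traced here (assumed parameter)
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node meminfo prefixes each line with "Node <n> "; the @23/@25 checks
    # in the trace select that file when a node index was supplied.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob                    # required for the +([0-9]) pattern below
    mem=("${mem[@]#Node +([0-9]) }")    # strip "Node N " prefixes, as at @29
    # IFS=': ' splits "AnonHugePages:       0 kB" into var=AnonHugePages,
    # val=0, _=kB, which is why the trace echoes a bare number.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # the long run of continue lines above
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Against the snapshot printed in the next call, get_meminfo AnonHugePages yields 0 and get_meminfo HugePages_Total would yield 1024, matching the values the trace echoes.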
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41816224 kB' 'MemAvailable: 46115172 kB' 'Buffers: 10016 kB' 'Cached: 11971552 kB' 'SwapCached: 0 kB' 'Active: 8452748 kB' 'Inactive: 4026424 kB' 'Active(anon): 8031852 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500764 kB' 'Mapped: 180600 kB' 'Shmem: 7534248 kB' 'KReclaimable: 499276 kB' 'Slab: 1290548 kB' 'SReclaimable: 499276 kB' 'SUnreclaim: 791272 kB' 'KernelStack: 21696 kB' 'PageTables: 7204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9241532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214464 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
00:04:07.529 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: key-by-key scan of the snapshot above, continue on every key until HugePages_Surp]
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41816052 kB' 'MemAvailable: 46115000 kB' 'Buffers: 10016 kB' 'Cached: 11971572 kB' 'SwapCached: 0 kB' 'Active: 8453280 kB' 'Inactive: 4026424 kB' 'Active(anon): 8032384 kB' 'Inactive(anon): 0 kB' 'Active(file): 420896 kB' 'Inactive(file): 4026424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501400 kB' 'Mapped: 180600 kB' 'Shmem: 7534268 kB' 'KReclaimable: 499276 kB' 'Slab: 1290652 kB' 'SReclaimable: 499276 kB' 'SUnreclaim: 791376 kB' 'KernelStack: 21824 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 9241552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214448 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB'
00:04:07.531 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: key-by-key scan of the snapshot above, continue on every key until HugePages_Rsvd]
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:07.534 nr_hugepages=1024
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:07.534 resv_hugepages=0
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:07.534 surplus_hugepages=0
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:07.534 anon_hugepages=0
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
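The hugepages.sh@96-@108 lines above are the accounting check of the no_shrink_alloc test: after allocation, the pool must consist of exactly the requested pages, with no surplus and no unfaulted reservations. A hedged sketch of that step, reconstructed from the trace (variable names follow the log; the literal 1024 in the (( )) tests is the already-expanded request size, which the real script presumably holds in a variable):

nr_hugepages=1024                     # assumed: set earlier from the test's allocation request
anon=$(get_meminfo AnonHugePages)     # 0 here: no THP-backed anonymous memory interfering
surp=$(get_meminfo HugePages_Surp)    # 0: no surplus pages beyond the static pool
resv=$(get_meminfo HugePages_Rsvd)    # 0: no pages reserved but not yet faulted in
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# Both identities must hold; a false (( )) returns nonzero, which presumably
# fails the test under the harness's errexit wrapper.
(( 1024 == nr_hugepages + surp + resv ))
(( 1024 == nr_hugepages ))

With all three counters at 0, both tests reduce to 1024 == 1024, so the run proceeds to re-read HugePages_Total below.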
'DirectMap2M: 10616832 kB' 'DirectMap1G: 58720256 kB' 00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.534 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... the same IFS=': ' / read -r var val _ / continue triplet repeats for every field of the snapshot above, MemTotal through Unaccepted, until HugePages_Total is reached ...] 00:04:07.536 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.536 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.536 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.536 07:00:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
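The get_meminfo trace above follows the same pattern both times it runs: load the meminfo file into an array, strip any per-node prefix, then scan key/value pairs until the requested field matches. A minimal standalone sketch of that pattern, assuming bash with extglob; the helper name and argument handling are illustrative, not the SPDK source itself:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # Illustrative re-creation of the traced pattern, not the SPDK code:
    # pick /proc/meminfo or a per-node copy, strip the "Node N " prefix,
    # then walk the fields until the requested key is found.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Total -> 1024 on the box traced above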
00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 25765348 kB' 'MemUsed: 6869088 kB' 'SwapCached: 0 kB' 'Active: 2710828 kB' 'Inactive: 308676 kB' 'Active(anon): 2585048 kB' 'Inactive(anon): 0 kB' 'Active(file): 125780 kB' 'Inactive(file): 308676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2844512 kB' 'Mapped: 81728 kB' 'AnonPages: 178168 kB' 'Shmem: 2410056 kB' 'KernelStack: 11320 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 342420 kB' 'Slab: 713340 kB' 'SReclaimable: 342420 kB' 'SUnreclaim: 370920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
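For the per-node pass, the trace switches mem_f to /sys/devices/system/node/node0/meminfo. The same per-node hugepage counters can also be read directly from sysfs; a short sketch mirroring the get_nodes loop traced above (the paths are standard kernel sysfs, and the 2048kB pool size is an assumption matching the Hugepagesize reported earlier):

    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything up to the node number
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "nodes: ${!nodes_sys[*]} -> pages: ${nodes_sys[*]}"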
00:04:07.536 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' [... the same IFS=': ' / read -r var val _ / continue triplet repeats for each node0 meminfo field, MemTotal through HugePages_Free, until HugePages_Surp is reached ...] 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:07.538 node0=1024 expecting 1024 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.538 00:04:07.538 real 0m6.927s 00:04:07.538 user 0m2.521s 00:04:07.538 sys 0m4.443s 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.538 07:00:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.538 ************************************ 00:04:07.538 END TEST no_shrink_alloc 00:04:07.538 ************************************ 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:07.798 07:00:12 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:07.798 00:04:07.798 real 0m22.845s 00:04:07.798 user 0m7.853s 00:04:07.798 sys 0m13.742s
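The clear_hp trace above resets every per-node, per-size hugepage pool so the next suite starts from a clean slate. A standalone sketch of the same idea (root required; the sysfs layout is the one the trace iterates):

    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # release this node's pool of this page size
        done
    done
    export CLEAR_HUGE=yes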
00:04:07.798 07:00:12 setup.sh.hugepages -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.798 07:00:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.798 ************************************ 00:04:07.798 END TEST hugepages 00:04:07.798 ************************************ 00:04:07.798 07:00:12 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:07.798 07:00:12 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.798 07:00:12 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.798 07:00:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.798 ************************************ 00:04:07.798 START TEST driver 00:04:07.798 ************************************ 00:04:07.798 07:00:12 setup.sh.driver -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:07.798 * Looking for test storage... 00:04:07.798 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:07.798 07:00:12 setup.sh.driver -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.798 07:00:12 setup.sh.driver -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.798 07:00:12 setup.sh.driver -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.798 07:00:12 setup.sh.driver -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.798 07:00:12 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.057 07:00:12 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:08.057 07:00:12 setup.sh.driver -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.057 07:00:12 setup.sh.driver -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.057 --rc genhtml_branch_coverage=1 00:04:08.057 --rc genhtml_function_coverage=1 00:04:08.057 --rc genhtml_legend=1 00:04:08.057 --rc geninfo_all_blocks=1 00:04:08.057 --rc geninfo_unexecuted_blocks=1 00:04:08.057 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:08.057 ' 00:04:08.057 07:00:12 setup.sh.driver -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.057 --rc genhtml_branch_coverage=1 00:04:08.057 --rc genhtml_function_coverage=1 00:04:08.057 --rc genhtml_legend=1 00:04:08.057 --rc geninfo_all_blocks=1 00:04:08.057 --rc geninfo_unexecuted_blocks=1 00:04:08.057 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:08.057 ' 00:04:08.057 07:00:12 setup.sh.driver -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.057 --rc genhtml_branch_coverage=1 00:04:08.057 --rc genhtml_function_coverage=1 00:04:08.057 --rc genhtml_legend=1 00:04:08.057 --rc geninfo_all_blocks=1 00:04:08.057 --rc geninfo_unexecuted_blocks=1 00:04:08.057 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:08.057 ' 00:04:08.058 07:00:12 setup.sh.driver -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.058 --rc genhtml_branch_coverage=1 00:04:08.058 --rc genhtml_function_coverage=1 00:04:08.058 --rc genhtml_legend=1 00:04:08.058 --rc geninfo_all_blocks=1 00:04:08.058 --rc geninfo_unexecuted_blocks=1 00:04:08.058 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:08.058 ' 00:04:08.058 07:00:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:08.058 07:00:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.058 07:00:12 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
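The lt 1.15 2 trace above is a dotted-version comparison: split both strings on the separators, then compare field by field. A simplified sketch of that logic (numeric fields only; the real cmp_versions in scripts/common.sh also splits on '-' and ':' and supports more operators):

    version_lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do          # missing fields count as 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov is pre-2.x: add the --rc fallback options"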
00:04:13.332 07:00:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.332 07:00:17 setup.sh.driver -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.332 07:00:17 setup.sh.driver -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.332 07:00:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.332 ************************************ 00:04:13.332 START TEST guess_driver 00:04:13.332 ************************************ 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1127 -- # guess_driver 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:13.332 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.332 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.332 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.332 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.332 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:13.332 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:13.332 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:13.332 Looking for driver=vfio-pci 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
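pick_driver above settles on vfio-pci because the kernel exposes populated IOMMU groups (176 of them) and modprobe can resolve the vfio_pci module chain. A condensed sketch of that decision; the uio_pci_generic fallback is an assumption about the non-IOMMU path, which this trace never takes:

    pick_driver_sketch() {
        local -a groups
        shopt -s nullglob                 # empty dir -> empty array, not a literal glob
        groups=(/sys/kernel/iommu_groups/*)
        shopt -u nullglob
        if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci                 # IOMMU present and module resolvable
        else
            echo uio_pci_generic          # assumed fallback when no IOMMU groups
        fi
    }
    echo "Looking for driver=$(pick_driver_sketch)"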
00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.332 07:00:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:16.623 07:00:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.623 07:00:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.623 07:00:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver [... the same marker-check triplet ([[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] / read -r _ _ _ _ marker setup_driver) repeats for each remaining device line emitted by setup.sh config ...] 00:04:18.001 07:00:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.001 07:00:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.001 07:00:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.001 07:00:22 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:18.001 07:00:22 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:18.001 07:00:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.001 07:00:22 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.278 00:04:23.278 real 0m9.716s 00:04:23.278 user 0m2.467s 00:04:23.278 sys 0m4.957s 00:04:23.278 07:00:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.278 07:00:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.278 ************************************ 00:04:23.278 END TEST guess_driver 00:04:23.278 ************************************ 00:04:23.278 00:04:23.278 real 0m14.821s 00:04:23.278 user 0m3.943s 00:04:23.278 sys 0m7.836s 00:04:23.278 07:00:26 setup.sh.driver -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.278 07:00:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.278 ************************************ 00:04:23.278 END TEST driver 00:04:23.278 ************************************ 00:04:23.278 07:00:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
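The START/END banners and real/user/sys blocks that bracket every test in this log come from the run_test wrapper. A hypothetical stand-in showing the shape of such a harness, not the autotest_common.sh implementation:

    run_test_sketch() {
        local name=$1; shift
        printf '%s\n' '************************************' \
            "START TEST $name" '************************************'
        time "$@"                         # emits the real/user/sys block
        local rc=$?
        printf '%s\n' '************************************' \
            "END TEST $name" '************************************'
        return $rc
    }

    run_test_sketch devices ./test/setup/devices.sh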
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.278 07:00:27 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.278 07:00:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.278 ************************************ 00:04:23.278 START TEST devices 00:04:23.278 ************************************ 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:23.278 * Looking for test storage... 00:04:23.278 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.278 07:00:27 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.278 --rc genhtml_branch_coverage=1 00:04:23.278 --rc genhtml_function_coverage=1 00:04:23.278 --rc genhtml_legend=1 00:04:23.278 --rc geninfo_all_blocks=1 00:04:23.278 --rc geninfo_unexecuted_blocks=1 00:04:23.278 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.278 ' 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.278 --rc genhtml_branch_coverage=1 00:04:23.278 --rc genhtml_function_coverage=1 00:04:23.278 --rc genhtml_legend=1 00:04:23.278 --rc geninfo_all_blocks=1 00:04:23.278 --rc geninfo_unexecuted_blocks=1 00:04:23.278 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.278 ' 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.278 --rc genhtml_branch_coverage=1 00:04:23.278 --rc genhtml_function_coverage=1 00:04:23.278 --rc genhtml_legend=1 00:04:23.278 --rc geninfo_all_blocks=1 00:04:23.278 --rc geninfo_unexecuted_blocks=1 00:04:23.278 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.278 ' 00:04:23.278 07:00:27 setup.sh.devices -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.278 --rc genhtml_branch_coverage=1 00:04:23.278 --rc genhtml_function_coverage=1 00:04:23.278 --rc genhtml_legend=1 00:04:23.278 --rc geninfo_all_blocks=1 00:04:23.278 --rc geninfo_unexecuted_blocks=1 00:04:23.278 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:23.278 ' 00:04:23.278 07:00:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:23.278 07:00:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:23.278 07:00:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.278 07:00:27 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:26.566 07:00:30 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:26.566 07:00:30 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:26.566 No valid GPT data, bailing 00:04:26.566 07:00:30 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.566 07:00:30 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:26.566 07:00:30 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:26.566 07:00:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:26.566 07:00:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:26.566 07:00:30 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:26.566 07:00:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.566 07:00:30 
setup.sh.devices -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.566 07:00:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.566 ************************************ 00:04:26.566 START TEST nvme_mount 00:04:26.566 ************************************ 00:04:26.566 07:00:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1127 -- # nvme_mount 00:04:26.566 07:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:26.566 07:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:26.566 07:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:26.567 07:00:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:27.503 Creating new GPT entries in memory. 00:04:27.503 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.503 other utilities. 00:04:27.503 07:00:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.503 07:00:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.503 07:00:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.503 07:00:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.503 07:00:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.880 Creating new GPT entries in memory. 00:04:28.880 The operation has completed successfully. 
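The --new=1:2048:2099199 boundary above falls out of the partition arithmetic traced from setup/common.sh: the requested size of 1073741824 bytes (1 GiB) is divided down to 512-byte sectors, and the first partition is placed at sector 2048. A minimal bash sketch of that calculation, reconstructed from the traced expressions (only fragments of the script are visible in this log):

    size=1073741824                                           # requested partition size, in bytes
    (( size /= 512 ))                                         # convert to 512-byte sectors -> 2097152
    part_start=0 part_end=0
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))  # first partition starts at sector 2048
    (( part_end = part_start + size - 1 ))                    # 2048 + 2097152 - 1 = 2099199
    sgdisk /dev/nvme0n1 --new=1:"$part_start":"$part_end"     # matches --new=1:2048:2099199 above

Iterating the same two expressions for a second partition gives part_start=2099200 and part_end=4196351, which is exactly the --new=2:2099200:4196351 call that appears later in the dm_mount test.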
00:04:28.880 07:00:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.880 07:00:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.880 07:00:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3744548 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.880 07:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:32.168 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.168 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.428 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:32.428 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:32.428 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.428 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.428 07:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.740 07:00:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # 
local pci status 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.740 07:00:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.032 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.032 00:04:39.032 real 0m12.471s 00:04:39.032 user 0m3.688s 00:04:39.032 sys 0m6.724s 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.032 07:00:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.032 ************************************ 00:04:39.032 END TEST nvme_mount 00:04:39.032 ************************************ 00:04:39.032 07:00:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:39.032 07:00:43 setup.sh.devices -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.032 07:00:43 setup.sh.devices -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.032 07:00:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.032 ************************************ 00:04:39.032 START TEST dm_mount 00:04:39.032 ************************************ 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1127 -- # dm_mount 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # 
local disk=nvme0n1 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.032 07:00:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:40.415 Creating new GPT entries in memory. 00:04:40.415 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.415 other utilities. 00:04:40.415 07:00:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.415 07:00:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.415 07:00:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.415 07:00:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.415 07:00:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:41.370 Creating new GPT entries in memory. 00:04:41.370 The operation has completed successfully. 00:04:41.370 07:00:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.370 07:00:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.370 07:00:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.370 07:00:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.370 07:00:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:42.022 The operation has completed successfully. 
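The dmsetup create nvme_dm_test call below is logged without the table that was fed to it. As a reconstruction only: a linear concatenation of the two 2097152-sector partitions just created would account for what the trace goes on to show (readlink resolving /dev/mapper/nvme_dm_test to /dev/dm-0, and dm-0 appearing under the holders/ directory of both nvme0n1p1 and nvme0n1p2). The table layout here is an assumption, not the script's verbatim contents:

    # Assumed table: concatenate both partitions into one 4194304-sector linear device.
    dmsetup create nvme_dm_test <<'EOF'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    EOF
    readlink -f /dev/mapper/nvme_dm_test   # -> /dev/dm-0, as the trace shows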
00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3748974 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:42.281 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.282 07:00:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.570 07:00:49 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:45.570 07:00:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:48.861 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:48.861
00:04:48.861 real 0m9.784s
00:04:48.861 user 0m2.366s
00:04:48.861 sys 0m4.493s
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:48.861 07:00:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:04:48.861 ************************************
00:04:48.861 END TEST dm_mount
00:04:48.861 ************************************
00:04:48.861 07:00:53 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:48.861 07:00:53 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:48.861 07:00:53 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:48.861 07:00:53 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:48.861 07:00:53 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:48.861 07:00:53 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:48.861 07:00:53 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:49.120 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:49.120 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:04:49.120 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:49.120 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:49.120 07:00:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:49.120 07:00:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:49.120 07:00:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:49.120 07:00:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:49.120 07:00:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:49.120 07:00:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:49.120 07:00:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:49.120
00:04:49.120 real 0m26.549s
00:04:49.120 user 0m7.513s
00:04:49.120 sys 0m13.963s
00:04:49.120 07:00:53 setup.sh.devices -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:49.120 07:00:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:49.120 ************************************
00:04:49.120 END TEST devices
00:04:49.120 ************************************
00:04:49.120
00:04:49.120 real 1m28.643s
00:04:49.120 user 0m27.066s
00:04:49.120 sys 0m50.553s
00:04:49.120 07:00:53 setup.sh -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:49.120 07:00:53 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:49.120 ************************************
00:04:49.120 END TEST setup.sh
00:04:49.120 ************************************
00:04:49.380 07:00:53 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:04:52.672 Hugepages
00:04:52.672 node hugesize free / total
00:04:52.672 node0 1048576kB 0 / 0
00:04:52.672 node0 2048kB 1024 / 1024
00:04:52.672 node1 1048576kB 0 / 0
00:04:52.672 node1 2048kB 1024 / 1024
00:04:52.672
00:04:52.672 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:52.672 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:52.672 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:52.672 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:52.672 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:52.672 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:52.672 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:52.672 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:52.672 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:52.672 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:52.672 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:52.672 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:52.672 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:52.672 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:52.672 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:52.672 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:52.672 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:52.672 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:04:52.672 07:00:57 -- spdk/autotest.sh@117 -- # uname -s
00:04:52.672 07:00:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:52.931 07:00:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:52.931 07:00:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:56.220 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:56.220 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:56.220 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:56.220 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:56.220 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:56.220 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:56.220 0000:00:04.1 (8086 2021): ioatdma
-> vfio-pci 00:04:56.220 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:56.220 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.220 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.220 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.220 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.220 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.220 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.220 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.220 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.597 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.856 07:01:02 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:58.792 07:01:03 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:58.792 07:01:03 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:58.792 07:01:03 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.792 07:01:03 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:58.792 07:01:03 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:58.792 07:01:03 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:58.792 07:01:03 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.792 07:01:03 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.792 07:01:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:59.051 07:01:03 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:59.051 07:01:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:59.051 07:01:03 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.338 Waiting for block devices as requested 00:05:02.338 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.338 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.338 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:02.338 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:02.338 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.598 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:02.598 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:02.598 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:02.857 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.857 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.857 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:03.116 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:03.117 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:03.117 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:03.375 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:03.375 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.375 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:03.635 07:01:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:03.635 07:01:08 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:03.635 07:01:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.635 07:01:08 -- common/autotest_common.sh@1485 -- # grep 0000:d8:00.0/nvme/nvme 00:05:03.635 07:01:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:03.635 07:01:08 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:03.635 07:01:08 -- common/autotest_common.sh@1490 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:03.635 07:01:08 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:03.635 07:01:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:03.635 07:01:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:03.635 07:01:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:03.635 07:01:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:03.635 07:01:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:03.635 07:01:08 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:05:03.635 07:01:08 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:03.635 07:01:08 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:03.635 07:01:08 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:03.635 07:01:08 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:03.635 07:01:08 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:03.635 07:01:08 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:03.635 07:01:08 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:03.635 07:01:08 -- common/autotest_common.sh@1541 -- # continue 00:05:03.635 07:01:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:03.635 07:01:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.635 07:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:03.635 07:01:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:03.635 07:01:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.635 07:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:03.635 07:01:08 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:06.925 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:06.925 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:08.304 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:08.304 07:01:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:08.304 07:01:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.304 07:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:08.304 07:01:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:08.304 07:01:12 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:08.304 07:01:12 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.304 07:01:12 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:08.304 07:01:12 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:08.304 07:01:12 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:08.304 07:01:12 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:08.304 07:01:12 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:08.304 07:01:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:08.304 07:01:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:08.304 07:01:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.304 07:01:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:08.304 07:01:12 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.304 07:01:12 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:08.304 07:01:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:05:08.304 07:01:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:08.304 07:01:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:08.304 07:01:12 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:08.304 07:01:12 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:08.304 07:01:12 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:08.304 07:01:12 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:08.304 07:01:12 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:05:08.304 07:01:12 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:05:08.304 07:01:12 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3758491 00:05:08.304 07:01:12 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.304 07:01:12 -- common/autotest_common.sh@1583 -- # waitforlisten 3758491 00:05:08.304 07:01:12 -- common/autotest_common.sh@833 -- # '[' -z 3758491 ']' 00:05:08.304 07:01:12 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.304 07:01:12 -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:08.304 07:01:12 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.304 07:01:12 -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:08.304 07:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:08.304 [2024-11-20 07:01:12.842557] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
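The xtrace above is get_nvme_bdfs_by_id at work: it enumerates NVMe transport addresses with gen_nvme.sh, then keeps only the BDFs whose PCI device ID in sysfs matches 0x0a54. A minimal standalone sketch of that filter follows; the gen_nvme.sh path, the jq expression, the sysfs device file, and the 0x0a54 ID all come from this log, while the script around them is illustrative rather than the harness code (which uses a plain array assignment where mapfile is used here):

#!/usr/bin/env bash
# Gather candidate NVMe BDFs the same way the harness does.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
mapfile -t _bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

bdfs=()
for bdf in "${_bdfs[@]}"; do
  # sysfs exposes each PCI function's device ID, e.g. 0x0a54.
  device=$(cat "/sys/bus/pci/devices/$bdf/device")
  [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"   # on this node: 0000:d8:00.0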
00:05:08.304 [2024-11-20 07:01:12.842628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758491 ]
00:05:08.563 [2024-11-20 07:01:12.914446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.563 [2024-11-20 07:01:12.957235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.823 07:01:13 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:08.823 07:01:13 -- common/autotest_common.sh@866 -- # return 0
00:05:08.823 07:01:13 -- common/autotest_common.sh@1585 -- # bdf_id=0
00:05:08.823 07:01:13 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}"
00:05:08.823 07:01:13 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:05:12.110 nvme0n1
00:05:12.110 07:01:16 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:12.110 [2024-11-20 07:01:16.371825] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:05:12.110 request:
00:05:12.110 {
00:05:12.110 "nvme_ctrlr_name": "nvme0",
00:05:12.110 "password": "test",
00:05:12.110 "method": "bdev_nvme_opal_revert",
00:05:12.110 "req_id": 1
00:05:12.110 }
00:05:12.110 Got JSON-RPC error response
00:05:12.110 response:
00:05:12.110 {
00:05:12.110 "code": -32602,
00:05:12.110 "message": "Invalid parameters"
00:05:12.110 }
00:05:12.110 07:01:16 -- common/autotest_common.sh@1589 -- # true
00:05:12.110 07:01:16 -- common/autotest_common.sh@1590 -- # (( ++bdf_id ))
00:05:12.110 07:01:16 -- common/autotest_common.sh@1593 -- # killprocess 3758491
00:05:12.110 07:01:16 -- common/autotest_common.sh@952 -- # '[' -z 3758491 ']'
00:05:12.110 07:01:16 -- common/autotest_common.sh@956 -- # kill -0 3758491
00:05:12.110 07:01:16 -- common/autotest_common.sh@957 -- # uname
00:05:12.110 07:01:16 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:12.110 07:01:16 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3758491
00:05:12.110 07:01:16 -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:12.110 07:01:16 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:12.110 07:01:16 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3758491'
00:05:12.110 killing process with pid 3758491
00:05:12.110 07:01:16 -- common/autotest_common.sh@971 -- # kill 3758491
00:05:12.110 07:01:16 -- common/autotest_common.sh@976 -- # wait 3758491
00:05:14.014 07:01:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:05:14.014 07:01:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:05:14.014 07:01:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:05:14.014 07:01:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:05:14.014 07:01:18 -- spdk/autotest.sh@149 -- # timing_enter lib
00:05:14.014 07:01:18 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:14.014 07:01:18 -- common/autotest_common.sh@10 -- # set +x
00:05:14.014 07:01:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:05:14.014 07:01:18 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh
00:05:14.014 07:01:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:14.014 07:01:18 -- common/autotest_common.sh@1109
-- # xtrace_disable 00:05:14.014 07:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:14.272 ************************************ 00:05:14.273 START TEST env 00:05:14.273 ************************************ 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:14.273 * Looking for test storage... 00:05:14.273 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.273 07:01:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.273 07:01:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.273 07:01:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.273 07:01:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.273 07:01:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.273 07:01:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.273 07:01:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.273 07:01:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.273 07:01:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.273 07:01:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.273 07:01:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.273 07:01:18 env -- scripts/common.sh@344 -- # case "$op" in 00:05:14.273 07:01:18 env -- scripts/common.sh@345 -- # : 1 00:05:14.273 07:01:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.273 07:01:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.273 07:01:18 env -- scripts/common.sh@365 -- # decimal 1 00:05:14.273 07:01:18 env -- scripts/common.sh@353 -- # local d=1 00:05:14.273 07:01:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.273 07:01:18 env -- scripts/common.sh@355 -- # echo 1 00:05:14.273 07:01:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.273 07:01:18 env -- scripts/common.sh@366 -- # decimal 2 00:05:14.273 07:01:18 env -- scripts/common.sh@353 -- # local d=2 00:05:14.273 07:01:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.273 07:01:18 env -- scripts/common.sh@355 -- # echo 2 00:05:14.273 07:01:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.273 07:01:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.273 07:01:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.273 07:01:18 env -- scripts/common.sh@368 -- # return 0 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.273 --rc genhtml_branch_coverage=1 00:05:14.273 --rc genhtml_function_coverage=1 00:05:14.273 --rc genhtml_legend=1 00:05:14.273 --rc geninfo_all_blocks=1 00:05:14.273 --rc geninfo_unexecuted_blocks=1 00:05:14.273 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.273 ' 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.273 --rc genhtml_branch_coverage=1 00:05:14.273 --rc genhtml_function_coverage=1 00:05:14.273 --rc genhtml_legend=1 00:05:14.273 --rc geninfo_all_blocks=1 00:05:14.273 --rc geninfo_unexecuted_blocks=1 00:05:14.273 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.273 ' 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.273 --rc genhtml_branch_coverage=1 00:05:14.273 --rc genhtml_function_coverage=1 00:05:14.273 --rc genhtml_legend=1 00:05:14.273 --rc geninfo_all_blocks=1 00:05:14.273 --rc geninfo_unexecuted_blocks=1 00:05:14.273 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.273 ' 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.273 --rc genhtml_branch_coverage=1 00:05:14.273 --rc genhtml_function_coverage=1 00:05:14.273 --rc genhtml_legend=1 00:05:14.273 --rc geninfo_all_blocks=1 00:05:14.273 --rc geninfo_unexecuted_blocks=1 00:05:14.273 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.273 ' 00:05:14.273 07:01:18 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.273 07:01:18 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.273 07:01:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.273 ************************************ 00:05:14.273 START TEST env_memory 00:05:14.273 ************************************ 00:05:14.273 07:01:18 env.env_memory -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:14.273 00:05:14.273 00:05:14.273 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.273 http://cunit.sourceforge.net/ 00:05:14.273 00:05:14.273 00:05:14.273 Suite: memory 00:05:14.533 Test: alloc and free memory map ...[2024-11-20 07:01:18.845551] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:14.533 passed 00:05:14.533 Test: mem map translation ...[2024-11-20 07:01:18.858731] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:14.533 [2024-11-20 07:01:18.858748] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:14.533 [2024-11-20 07:01:18.858779] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:14.533 [2024-11-20 07:01:18.858787] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:14.533 passed 00:05:14.533 Test: mem map registration ...[2024-11-20 07:01:18.879347] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:14.533 [2024-11-20 07:01:18.879363] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:14.533 passed 00:05:14.533 Test: mem map adjacent registrations ...passed 00:05:14.533 00:05:14.533 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.533 suites 1 1 n/a 0 0 00:05:14.533 tests 4 4 4 0 0 00:05:14.533 asserts 152 152 152 0 n/a 00:05:14.533 00:05:14.533 Elapsed time = 0.086 seconds 00:05:14.533 00:05:14.533 real 0m0.100s 00:05:14.533 user 0m0.086s 00:05:14.533 sys 0m0.014s 00:05:14.533 07:01:18 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.533 07:01:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:14.533 ************************************ 00:05:14.533 END TEST env_memory 00:05:14.533 ************************************ 00:05:14.533 07:01:18 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:14.533 07:01:18 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.533 07:01:18 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.533 07:01:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.533 ************************************ 00:05:14.533 START TEST env_vtophys 00:05:14.533 ************************************ 00:05:14.533 07:01:18 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:14.533 EAL: lib.eal log level changed from notice to debug 00:05:14.533 EAL: Detected lcore 0 as core 0 on socket 0 00:05:14.533 EAL: Detected lcore 1 as core 1 on socket 0 00:05:14.533 EAL: Detected lcore 2 as core 2 on socket 0 00:05:14.533 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:14.533 EAL: Detected lcore 4 as core 4 on socket 0 00:05:14.533 EAL: Detected lcore 5 as core 5 on socket 0 00:05:14.533 EAL: Detected lcore 6 as core 6 on socket 0 00:05:14.533 EAL: Detected lcore 7 as core 8 on socket 0 00:05:14.533 EAL: Detected lcore 8 as core 9 on socket 0 00:05:14.533 EAL: Detected lcore 9 as core 10 on socket 0 00:05:14.533 EAL: Detected lcore 10 as core 11 on socket 0 00:05:14.533 EAL: Detected lcore 11 as core 12 on socket 0 00:05:14.533 EAL: Detected lcore 12 as core 13 on socket 0 00:05:14.533 EAL: Detected lcore 13 as core 14 on socket 0 00:05:14.533 EAL: Detected lcore 14 as core 16 on socket 0 00:05:14.533 EAL: Detected lcore 15 as core 17 on socket 0 00:05:14.533 EAL: Detected lcore 16 as core 18 on socket 0 00:05:14.533 EAL: Detected lcore 17 as core 19 on socket 0 00:05:14.533 EAL: Detected lcore 18 as core 20 on socket 0 00:05:14.533 EAL: Detected lcore 19 as core 21 on socket 0 00:05:14.533 EAL: Detected lcore 20 as core 22 on socket 0 00:05:14.533 EAL: Detected lcore 21 as core 24 on socket 0 00:05:14.533 EAL: Detected lcore 22 as core 25 on socket 0 00:05:14.533 EAL: Detected lcore 23 as core 26 on socket 0 00:05:14.533 EAL: Detected lcore 24 as core 27 on socket 0 00:05:14.533 EAL: Detected lcore 25 as core 28 on socket 0 00:05:14.533 EAL: Detected lcore 26 as core 29 on socket 0 00:05:14.533 EAL: Detected lcore 27 as core 30 on socket 0 00:05:14.533 EAL: Detected lcore 28 as core 0 on socket 1 00:05:14.533 EAL: Detected lcore 29 as core 1 on socket 1 00:05:14.533 EAL: Detected lcore 30 as core 2 on socket 1 00:05:14.534 EAL: Detected lcore 31 as core 3 on socket 1 00:05:14.534 EAL: Detected lcore 32 as core 4 on socket 1 00:05:14.534 EAL: Detected lcore 33 as core 5 on socket 1 00:05:14.534 EAL: Detected lcore 34 as core 6 on socket 1 00:05:14.534 EAL: Detected lcore 35 as core 8 on socket 1 00:05:14.534 EAL: Detected lcore 36 as core 9 on socket 1 00:05:14.534 EAL: Detected lcore 37 as core 10 on socket 1 00:05:14.534 EAL: Detected lcore 38 as core 11 on socket 1 00:05:14.534 EAL: Detected lcore 39 as core 12 on socket 1 00:05:14.534 EAL: Detected lcore 40 as core 13 on socket 1 00:05:14.534 EAL: Detected lcore 41 as core 14 on socket 1 00:05:14.534 EAL: Detected lcore 42 as core 16 on socket 1 00:05:14.534 EAL: Detected lcore 43 as core 17 on socket 1 00:05:14.534 EAL: Detected lcore 44 as core 18 on socket 1 00:05:14.534 EAL: Detected lcore 45 as core 19 on socket 1 00:05:14.534 EAL: Detected lcore 46 as core 20 on socket 1 00:05:14.534 EAL: Detected lcore 47 as core 21 on socket 1 00:05:14.534 EAL: Detected lcore 48 as core 22 on socket 1 00:05:14.534 EAL: Detected lcore 49 as core 24 on socket 1 00:05:14.534 EAL: Detected lcore 50 as core 25 on socket 1 00:05:14.534 EAL: Detected lcore 51 as core 26 on socket 1 00:05:14.534 EAL: Detected lcore 52 as core 27 on socket 1 00:05:14.534 EAL: Detected lcore 53 as core 28 on socket 1 00:05:14.534 EAL: Detected lcore 54 as core 29 on socket 1 00:05:14.534 EAL: Detected lcore 55 as core 30 on socket 1 00:05:14.534 EAL: Detected lcore 56 as core 0 on socket 0 00:05:14.534 EAL: Detected lcore 57 as core 1 on socket 0 00:05:14.534 EAL: Detected lcore 58 as core 2 on socket 0 00:05:14.534 EAL: Detected lcore 59 as core 3 on socket 0 00:05:14.534 EAL: Detected lcore 60 as core 4 on socket 0 00:05:14.534 EAL: Detected lcore 61 as core 5 on socket 0 00:05:14.534 EAL: Detected lcore 62 as core 6 on socket 0 00:05:14.534 EAL: Detected lcore 63 as core 8 on socket 0 00:05:14.534 EAL: 
Detected lcore 64 as core 9 on socket 0 00:05:14.534 EAL: Detected lcore 65 as core 10 on socket 0 00:05:14.534 EAL: Detected lcore 66 as core 11 on socket 0 00:05:14.534 EAL: Detected lcore 67 as core 12 on socket 0 00:05:14.534 EAL: Detected lcore 68 as core 13 on socket 0 00:05:14.534 EAL: Detected lcore 69 as core 14 on socket 0 00:05:14.534 EAL: Detected lcore 70 as core 16 on socket 0 00:05:14.534 EAL: Detected lcore 71 as core 17 on socket 0 00:05:14.534 EAL: Detected lcore 72 as core 18 on socket 0 00:05:14.534 EAL: Detected lcore 73 as core 19 on socket 0 00:05:14.534 EAL: Detected lcore 74 as core 20 on socket 0 00:05:14.534 EAL: Detected lcore 75 as core 21 on socket 0 00:05:14.534 EAL: Detected lcore 76 as core 22 on socket 0 00:05:14.534 EAL: Detected lcore 77 as core 24 on socket 0 00:05:14.534 EAL: Detected lcore 78 as core 25 on socket 0 00:05:14.534 EAL: Detected lcore 79 as core 26 on socket 0 00:05:14.534 EAL: Detected lcore 80 as core 27 on socket 0 00:05:14.534 EAL: Detected lcore 81 as core 28 on socket 0 00:05:14.534 EAL: Detected lcore 82 as core 29 on socket 0 00:05:14.534 EAL: Detected lcore 83 as core 30 on socket 0 00:05:14.534 EAL: Detected lcore 84 as core 0 on socket 1 00:05:14.534 EAL: Detected lcore 85 as core 1 on socket 1 00:05:14.534 EAL: Detected lcore 86 as core 2 on socket 1 00:05:14.534 EAL: Detected lcore 87 as core 3 on socket 1 00:05:14.534 EAL: Detected lcore 88 as core 4 on socket 1 00:05:14.534 EAL: Detected lcore 89 as core 5 on socket 1 00:05:14.534 EAL: Detected lcore 90 as core 6 on socket 1 00:05:14.534 EAL: Detected lcore 91 as core 8 on socket 1 00:05:14.534 EAL: Detected lcore 92 as core 9 on socket 1 00:05:14.534 EAL: Detected lcore 93 as core 10 on socket 1 00:05:14.534 EAL: Detected lcore 94 as core 11 on socket 1 00:05:14.534 EAL: Detected lcore 95 as core 12 on socket 1 00:05:14.534 EAL: Detected lcore 96 as core 13 on socket 1 00:05:14.534 EAL: Detected lcore 97 as core 14 on socket 1 00:05:14.534 EAL: Detected lcore 98 as core 16 on socket 1 00:05:14.534 EAL: Detected lcore 99 as core 17 on socket 1 00:05:14.534 EAL: Detected lcore 100 as core 18 on socket 1 00:05:14.534 EAL: Detected lcore 101 as core 19 on socket 1 00:05:14.534 EAL: Detected lcore 102 as core 20 on socket 1 00:05:14.534 EAL: Detected lcore 103 as core 21 on socket 1 00:05:14.534 EAL: Detected lcore 104 as core 22 on socket 1 00:05:14.534 EAL: Detected lcore 105 as core 24 on socket 1 00:05:14.534 EAL: Detected lcore 106 as core 25 on socket 1 00:05:14.534 EAL: Detected lcore 107 as core 26 on socket 1 00:05:14.534 EAL: Detected lcore 108 as core 27 on socket 1 00:05:14.534 EAL: Detected lcore 109 as core 28 on socket 1 00:05:14.534 EAL: Detected lcore 110 as core 29 on socket 1 00:05:14.534 EAL: Detected lcore 111 as core 30 on socket 1 00:05:14.534 EAL: Maximum logical cores by configuration: 128 00:05:14.534 EAL: Detected CPU lcores: 112 00:05:14.534 EAL: Detected NUMA nodes: 2 00:05:14.534 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:14.534 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:14.534 EAL: Checking presence of .so 'librte_eal.so' 00:05:14.534 EAL: Detected static linkage of DPDK 00:05:14.534 EAL: No shared files mode enabled, IPC will be disabled 00:05:14.534 EAL: Bus pci wants IOVA as 'DC' 00:05:14.534 EAL: Buses did not request a specific IOVA mode. 00:05:14.534 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:14.534 EAL: Selected IOVA mode 'VA' 00:05:14.534 EAL: Probing VFIO support... 
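Every "Detected lcore N as core M on socket S" line above is derived from the kernel's CPU topology files. A rough shell equivalent that reproduces the same mapping on the test node (a sketch of where the data comes from, not of how EAL itself is implemented):

for cpu in $(ls -d /sys/devices/system/cpu/cpu[0-9]* | sort -V); do
  lcore=${cpu##*cpu}                                   # logical CPU number
  core=$(cat "$cpu/topology/core_id")                  # physical core within the package
  socket=$(cat "$cpu/topology/physical_package_id")    # socket / NUMA package
  echo "lcore $lcore as core $core on socket $socket"
done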
00:05:14.534 EAL: IOMMU type 1 (Type 1) is supported 00:05:14.534 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:14.534 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:14.534 EAL: VFIO support initialized 00:05:14.534 EAL: Ask a virtual area of 0x2e000 bytes 00:05:14.534 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:14.534 EAL: Setting up physically contiguous memory... 00:05:14.534 EAL: Setting maximum number of open files to 524288 00:05:14.534 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:14.534 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:14.534 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:14.534 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.534 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:14.534 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.534 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.534 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:14.534 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:14.534 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.534 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:14.534 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.534 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.534 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:14.534 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:14.534 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.534 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:14.534 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.534 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.534 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:14.534 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:14.534 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.534 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:14.534 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.534 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.534 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:14.534 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:14.534 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:14.534 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.534 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:14.534 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:14.534 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.534 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:14.534 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:14.534 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.534 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:14.534 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:14.534 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.534 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:14.534 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:14.534 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.534 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:14.534 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:14.534 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.534 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:14.534 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:14.534 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.534 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:14.534 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:14.534 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.534 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:14.534 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:14.535 EAL: Hugepages will be freed exactly as allocated. 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: TSC frequency is ~2500000 KHz 00:05:14.535 EAL: Main lcore 0 is ready (tid=7fcadddd7a00;cpuset=[0]) 00:05:14.535 EAL: Trying to obtain current memory policy. 00:05:14.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.535 EAL: Restoring previous memory policy: 0 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was expanded by 2MB 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Mem event callback 'spdk:(nil)' registered 00:05:14.535 00:05:14.535 00:05:14.535 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.535 http://cunit.sourceforge.net/ 00:05:14.535 00:05:14.535 00:05:14.535 Suite: components_suite 00:05:14.535 Test: vtophys_malloc_test ...passed 00:05:14.535 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:14.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.535 EAL: Restoring previous memory policy: 4 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was expanded by 4MB 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was shrunk by 4MB 00:05:14.535 EAL: Trying to obtain current memory policy. 00:05:14.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.535 EAL: Restoring previous memory policy: 4 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was expanded by 6MB 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was shrunk by 6MB 00:05:14.535 EAL: Trying to obtain current memory policy. 00:05:14.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.535 EAL: Restoring previous memory policy: 4 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was expanded by 10MB 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was shrunk by 10MB 00:05:14.535 EAL: Trying to obtain current memory policy. 
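The reservation sizes above follow directly from the memseg-list geometry EAL printed: each list holds n_segs:8192 entries of hugepage_sz:2097152 bytes, and 8192 x 2 MiB is exactly the 0x400000000 bytes of virtual address space requested per list. A quick shell check of that arithmetic:

printf '0x%x\n' $((8192 * 2097152))   # 0x400000000, i.e. 16 GiB per memseg list
echo "$((4 * 2 * 16)) GiB"            # 4 lists/socket x 2 sockets -> 128 GiB of reserved VA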
00:05:14.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.535 EAL: Restoring previous memory policy: 4 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was expanded by 18MB 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.535 EAL: Trying to obtain current memory policy. 00:05:14.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.535 EAL: Restoring previous memory policy: 4 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.535 EAL: request: mp_malloc_sync 00:05:14.535 EAL: No shared files mode enabled, IPC is disabled 00:05:14.535 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.535 EAL: Trying to obtain current memory policy. 00:05:14.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.797 EAL: Restoring previous memory policy: 4 00:05:14.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.797 EAL: request: mp_malloc_sync 00:05:14.797 EAL: No shared files mode enabled, IPC is disabled 00:05:14.797 EAL: Heap on socket 0 was expanded by 66MB 00:05:14.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.797 EAL: request: mp_malloc_sync 00:05:14.797 EAL: No shared files mode enabled, IPC is disabled 00:05:14.797 EAL: Heap on socket 0 was shrunk by 66MB 00:05:14.797 EAL: Trying to obtain current memory policy. 00:05:14.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.797 EAL: Restoring previous memory policy: 4 00:05:14.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.797 EAL: request: mp_malloc_sync 00:05:14.797 EAL: No shared files mode enabled, IPC is disabled 00:05:14.797 EAL: Heap on socket 0 was expanded by 130MB 00:05:14.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.797 EAL: request: mp_malloc_sync 00:05:14.797 EAL: No shared files mode enabled, IPC is disabled 00:05:14.797 EAL: Heap on socket 0 was shrunk by 130MB 00:05:14.797 EAL: Trying to obtain current memory policy. 00:05:14.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.797 EAL: Restoring previous memory policy: 4 00:05:14.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.797 EAL: request: mp_malloc_sync 00:05:14.797 EAL: No shared files mode enabled, IPC is disabled 00:05:14.797 EAL: Heap on socket 0 was expanded by 258MB 00:05:14.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.797 EAL: request: mp_malloc_sync 00:05:14.797 EAL: No shared files mode enabled, IPC is disabled 00:05:14.797 EAL: Heap on socket 0 was shrunk by 258MB 00:05:14.797 EAL: Trying to obtain current memory policy. 
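The expand/shrink pairs in vtophys_spdk_malloc_test are not arbitrary sizes: the suite walks a 2^n + 2 MB progression, which is why the log shows 4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB and, just below, 514MB and 1026MB. The sequence is easy to regenerate in the shell:

for n in $(seq 1 10); do printf '%sMB ' $((2**n + 2)); done; echo
# 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB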
00:05:14.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.056 EAL: Restoring previous memory policy: 4 00:05:15.056 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.056 EAL: request: mp_malloc_sync 00:05:15.056 EAL: No shared files mode enabled, IPC is disabled 00:05:15.056 EAL: Heap on socket 0 was expanded by 514MB 00:05:15.056 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.056 EAL: request: mp_malloc_sync 00:05:15.056 EAL: No shared files mode enabled, IPC is disabled 00:05:15.056 EAL: Heap on socket 0 was shrunk by 514MB 00:05:15.056 EAL: Trying to obtain current memory policy. 00:05:15.056 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.315 EAL: Restoring previous memory policy: 4 00:05:15.315 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.315 EAL: request: mp_malloc_sync 00:05:15.315 EAL: No shared files mode enabled, IPC is disabled 00:05:15.315 EAL: Heap on socket 0 was expanded by 1026MB 00:05:15.573 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.573 EAL: request: mp_malloc_sync 00:05:15.573 EAL: No shared files mode enabled, IPC is disabled 00:05:15.573 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:15.573 passed 00:05:15.573 00:05:15.573 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.573 suites 1 1 n/a 0 0 00:05:15.573 tests 2 2 2 0 0 00:05:15.573 asserts 497 497 497 0 n/a 00:05:15.573 00:05:15.573 Elapsed time = 0.961 seconds 00:05:15.573 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.573 EAL: request: mp_malloc_sync 00:05:15.573 EAL: No shared files mode enabled, IPC is disabled 00:05:15.573 EAL: Heap on socket 0 was shrunk by 2MB 00:05:15.573 EAL: No shared files mode enabled, IPC is disabled 00:05:15.573 EAL: No shared files mode enabled, IPC is disabled 00:05:15.573 EAL: No shared files mode enabled, IPC is disabled 00:05:15.573 00:05:15.573 real 0m1.072s 00:05:15.573 user 0m0.628s 00:05:15.573 sys 0m0.411s 00:05:15.573 07:01:20 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.574 07:01:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:15.574 ************************************ 00:05:15.574 END TEST env_vtophys 00:05:15.574 ************************************ 00:05:15.574 07:01:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:15.574 07:01:20 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.574 07:01:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.574 07:01:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.832 ************************************ 00:05:15.832 START TEST env_pci 00:05:15.832 ************************************ 00:05:15.832 07:01:20 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:15.832 00:05:15.832 00:05:15.832 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.832 http://cunit.sourceforge.net/ 00:05:15.832 00:05:15.832 00:05:15.832 Suite: pci 00:05:15.832 Test: pci_hook ...[2024-11-20 07:01:20.161441] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3759799 has claimed it 00:05:15.832 EAL: Cannot find device (10000:00:01.0) 00:05:15.832 EAL: Failed to attach device on primary process 00:05:15.832 passed 00:05:15.832 00:05:15.832 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:15.832 suites 1 1 n/a 0 0 00:05:15.832 tests 1 1 1 0 0 00:05:15.832 asserts 25 25 25 0 n/a 00:05:15.832 00:05:15.832 Elapsed time = 0.033 seconds 00:05:15.832 00:05:15.832 real 0m0.053s 00:05:15.832 user 0m0.013s 00:05:15.832 sys 0m0.040s 00:05:15.832 07:01:20 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.832 07:01:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:15.832 ************************************ 00:05:15.832 END TEST env_pci 00:05:15.832 ************************************ 00:05:15.832 07:01:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:15.832 07:01:20 env -- env/env.sh@15 -- # uname 00:05:15.832 07:01:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:15.832 07:01:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:15.832 07:01:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:15.832 07:01:20 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:15.832 07:01:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.832 07:01:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.832 ************************************ 00:05:15.832 START TEST env_dpdk_post_init 00:05:15.832 ************************************ 00:05:15.832 07:01:20 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:15.832 EAL: Detected CPU lcores: 112 00:05:15.832 EAL: Detected NUMA nodes: 2 00:05:15.832 EAL: Detected static linkage of DPDK 00:05:15.832 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.832 EAL: Selected IOVA mode 'VA' 00:05:15.832 EAL: VFIO support initialized 00:05:15.832 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.090 EAL: Using IOMMU type 1 (Type 1) 00:05:16.657 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:20.846 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:20.846 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:20.846 Starting DPDK initialization... 00:05:20.846 Starting SPDK post initialization... 00:05:20.846 SPDK NVMe probe 00:05:20.846 Attaching to 0000:d8:00.0 00:05:20.846 Attached to 0000:d8:00.0 00:05:20.846 Cleaning up... 
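The pci_hook test earlier in this run passes by failing on purpose: it attaches against a fabricated BDF (10000:00:01.0) that has already been claimed, and the ERROR lines show the claim being rejected. The error text also reveals that claims are arbitrated through plain lock files under /var/tmp, so a leftover claim from a crashed run can be inspected by hand; an illustrative check, not part of the harness, with the /var/tmp/spdk_pci_lock_ prefix taken from the error message above:

for lock in /var/tmp/spdk_pci_lock_*; do
  [[ -e $lock ]] || continue   # the glob may match nothing
  echo "existing PCI claim lock: $lock"
done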
00:05:20.846 00:05:20.846 real 0m4.711s 00:05:20.846 user 0m3.311s 00:05:20.846 sys 0m0.642s 00:05:20.846 07:01:24 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.846 07:01:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:20.846 ************************************ 00:05:20.846 END TEST env_dpdk_post_init 00:05:20.846 ************************************ 00:05:20.846 07:01:25 env -- env/env.sh@26 -- # uname 00:05:20.846 07:01:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:20.846 07:01:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.846 07:01:25 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.846 07:01:25 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.846 07:01:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.846 ************************************ 00:05:20.846 START TEST env_mem_callbacks 00:05:20.846 ************************************ 00:05:20.846 07:01:25 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.846 EAL: Detected CPU lcores: 112 00:05:20.846 EAL: Detected NUMA nodes: 2 00:05:20.846 EAL: Detected static linkage of DPDK 00:05:20.846 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.846 EAL: Selected IOVA mode 'VA' 00:05:20.847 EAL: VFIO support initialized 00:05:20.847 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.847 00:05:20.847 00:05:20.847 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.847 http://cunit.sourceforge.net/ 00:05:20.847 00:05:20.847 00:05:20.847 Suite: memory 00:05:20.847 Test: test ... 
00:05:20.847 register 0x200000200000 2097152
00:05:20.847 malloc 3145728
00:05:20.847 register 0x200000400000 4194304
00:05:20.847 buf 0x200000500000 len 3145728 PASSED
00:05:20.847 malloc 64
00:05:20.847 buf 0x2000004fff40 len 64 PASSED
00:05:20.847 malloc 4194304
00:05:20.847 register 0x200000800000 6291456
00:05:20.847 buf 0x200000a00000 len 4194304 PASSED
00:05:20.847 free 0x200000500000 3145728
00:05:20.847 free 0x2000004fff40 64
00:05:20.847 unregister 0x200000400000 4194304 PASSED
00:05:20.847 free 0x200000a00000 4194304
00:05:20.847 unregister 0x200000800000 6291456 PASSED
00:05:20.847 malloc 8388608
00:05:20.847 register 0x200000400000 10485760
00:05:20.847 buf 0x200000600000 len 8388608 PASSED
00:05:20.847 free 0x200000600000 8388608
00:05:20.847 unregister 0x200000400000 10485760 PASSED
00:05:20.847 passed
00:05:20.847
00:05:20.847 Run Summary: Type Total Ran Passed Failed Inactive
00:05:20.847 suites 1 1 n/a 0 0
00:05:20.847 tests 1 1 1 0 0
00:05:20.847 asserts 15 15 15 0 n/a
00:05:20.847
00:05:20.847 Elapsed time = 0.006 seconds
00:05:20.847
00:05:20.847 real 0m0.065s
00:05:20.847 user 0m0.016s
00:05:20.847 sys 0m0.049s
00:05:20.847 07:01:25 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:20.847 07:01:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:20.847 ************************************
00:05:20.847 END TEST env_mem_callbacks
00:05:20.847 ************************************
00:05:20.847
00:05:20.847 real 0m6.612s
00:05:20.847 user 0m4.303s
00:05:20.847 sys 0m1.561s
00:05:20.847 07:01:25 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:20.847 07:01:25 env -- common/autotest_common.sh@10 -- # set +x
00:05:20.847 ************************************
00:05:20.847 END TEST env
00:05:20.847 ************************************
00:05:20.847 07:01:25 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh
00:05:20.847 07:01:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:20.847 07:01:25 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:20.847 07:01:25 -- common/autotest_common.sh@10 -- # set +x
00:05:20.847 ************************************
00:05:20.847 START TEST rpc
00:05:20.847 ************************************
00:05:20.847 07:01:25 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh
00:05:20.847 * Looking for test storage...
00:05:20.847 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:20.847 07:01:25 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.847 07:01:25 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.847 07:01:25 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.106 07:01:25 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.106 07:01:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.106 07:01:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.106 07:01:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.106 07:01:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.106 07:01:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.106 07:01:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.106 07:01:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.106 07:01:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.106 07:01:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.106 07:01:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.106 07:01:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.106 07:01:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.106 07:01:25 rpc -- scripts/common.sh@345 -- # : 1 00:05:21.106 07:01:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.106 07:01:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.106 07:01:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.106 07:01:25 rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.106 07:01:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.106 07:01:25 rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.106 07:01:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.106 07:01:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.106 07:01:25 rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.106 07:01:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.106 07:01:25 rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.106 07:01:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.106 07:01:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.106 07:01:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.106 07:01:25 rpc -- scripts/common.sh@368 -- # return 0 00:05:21.106 07:01:25 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.106 07:01:25 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.107 --rc genhtml_branch_coverage=1 00:05:21.107 --rc genhtml_function_coverage=1 00:05:21.107 --rc genhtml_legend=1 00:05:21.107 --rc geninfo_all_blocks=1 00:05:21.107 --rc geninfo_unexecuted_blocks=1 00:05:21.107 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.107 ' 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.107 --rc genhtml_branch_coverage=1 00:05:21.107 --rc genhtml_function_coverage=1 00:05:21.107 --rc genhtml_legend=1 00:05:21.107 --rc geninfo_all_blocks=1 00:05:21.107 --rc geninfo_unexecuted_blocks=1 00:05:21.107 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.107 ' 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:05:21.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.107 --rc genhtml_branch_coverage=1 00:05:21.107 --rc genhtml_function_coverage=1 00:05:21.107 --rc genhtml_legend=1 00:05:21.107 --rc geninfo_all_blocks=1 00:05:21.107 --rc geninfo_unexecuted_blocks=1 00:05:21.107 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.107 ' 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.107 --rc genhtml_branch_coverage=1 00:05:21.107 --rc genhtml_function_coverage=1 00:05:21.107 --rc genhtml_legend=1 00:05:21.107 --rc geninfo_all_blocks=1 00:05:21.107 --rc geninfo_unexecuted_blocks=1 00:05:21.107 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.107 ' 00:05:21.107 07:01:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3760972 00:05:21.107 07:01:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.107 07:01:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:21.107 07:01:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3760972 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@833 -- # '[' -z 3760972 ']' 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.107 07:01:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.107 [2024-11-20 07:01:25.483345] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:21.107 [2024-11-20 07:01:25.483412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760972 ] 00:05:21.107 [2024-11-20 07:01:25.553544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.107 [2024-11-20 07:01:25.592329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:21.107 [2024-11-20 07:01:25.592370] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3760972' to capture a snapshot of events at runtime. 00:05:21.107 [2024-11-20 07:01:25.592380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:21.107 [2024-11-20 07:01:25.592389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:21.107 [2024-11-20 07:01:25.592395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3760972 for offline analysis/debug. 
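The waitforlisten call traced above (with rpc_addr=/var/tmp/spdk.sock and max_retries=100 taken straight from the xtrace) blocks until the freshly launched spdk_tgt answers on its RPC socket; in spirit it is a bounded poll. A simplified stand-in is sketched below; the real helper in autotest_common.sh is more thorough, and rpc_get_methods is used here simply as a cheap RPC that only succeeds once the server is up:

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
  # Succeeds only once the target's RPC server is listening.
  "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && break
  sleep 0.5
done
(( i < max_retries )) || { echo "spdk_tgt never started listening" >&2; exit 1; }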
00:05:21.107 [2024-11-20 07:01:25.593027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.367 07:01:25 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:21.367 07:01:25 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:21.367 07:01:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:21.367 07:01:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:21.367 07:01:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:21.367 07:01:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:21.367 07:01:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.367 07:01:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.367 07:01:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.367 ************************************ 00:05:21.367 START TEST rpc_integrity 00:05:21.367 ************************************ 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.367 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.367 { 00:05:21.367 "name": "Malloc0", 00:05:21.367 "aliases": [ 00:05:21.367 "55886f19-d9e2-4b2d-9cf1-cdd14da92863" 00:05:21.367 ], 00:05:21.367 "product_name": "Malloc disk", 00:05:21.367 "block_size": 512, 00:05:21.367 "num_blocks": 16384, 00:05:21.367 "uuid": "55886f19-d9e2-4b2d-9cf1-cdd14da92863", 00:05:21.367 "assigned_rate_limits": { 00:05:21.367 "rw_ios_per_sec": 0, 00:05:21.367 "rw_mbytes_per_sec": 0, 00:05:21.367 "r_mbytes_per_sec": 0, 00:05:21.367 "w_mbytes_per_sec": 
0 00:05:21.367 }, 00:05:21.367 "claimed": false, 00:05:21.367 "zoned": false, 00:05:21.367 "supported_io_types": { 00:05:21.367 "read": true, 00:05:21.367 "write": true, 00:05:21.367 "unmap": true, 00:05:21.367 "flush": true, 00:05:21.367 "reset": true, 00:05:21.367 "nvme_admin": false, 00:05:21.367 "nvme_io": false, 00:05:21.367 "nvme_io_md": false, 00:05:21.367 "write_zeroes": true, 00:05:21.367 "zcopy": true, 00:05:21.367 "get_zone_info": false, 00:05:21.367 "zone_management": false, 00:05:21.367 "zone_append": false, 00:05:21.367 "compare": false, 00:05:21.367 "compare_and_write": false, 00:05:21.367 "abort": true, 00:05:21.367 "seek_hole": false, 00:05:21.367 "seek_data": false, 00:05:21.367 "copy": true, 00:05:21.367 "nvme_iov_md": false 00:05:21.367 }, 00:05:21.367 "memory_domains": [ 00:05:21.367 { 00:05:21.367 "dma_device_id": "system", 00:05:21.367 "dma_device_type": 1 00:05:21.367 }, 00:05:21.367 { 00:05:21.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.367 "dma_device_type": 2 00:05:21.367 } 00:05:21.367 ], 00:05:21.367 "driver_specific": {} 00:05:21.367 } 00:05:21.367 ]' 00:05:21.367 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.626 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.626 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:21.626 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.626 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.626 [2024-11-20 07:01:25.942732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:21.626 [2024-11-20 07:01:25.942767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.626 [2024-11-20 07:01:25.942785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x62f5280 00:05:21.626 [2024-11-20 07:01:25.942794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.626 [2024-11-20 07:01:25.943719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.626 [2024-11-20 07:01:25.943743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.626 Passthru0 00:05:21.626 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.626 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.626 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.626 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.626 07:01:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.626 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.626 { 00:05:21.626 "name": "Malloc0", 00:05:21.626 "aliases": [ 00:05:21.626 "55886f19-d9e2-4b2d-9cf1-cdd14da92863" 00:05:21.626 ], 00:05:21.626 "product_name": "Malloc disk", 00:05:21.626 "block_size": 512, 00:05:21.626 "num_blocks": 16384, 00:05:21.626 "uuid": "55886f19-d9e2-4b2d-9cf1-cdd14da92863", 00:05:21.626 "assigned_rate_limits": { 00:05:21.626 "rw_ios_per_sec": 0, 00:05:21.626 "rw_mbytes_per_sec": 0, 00:05:21.626 "r_mbytes_per_sec": 0, 00:05:21.626 "w_mbytes_per_sec": 0 00:05:21.626 }, 00:05:21.626 "claimed": true, 00:05:21.626 "claim_type": "exclusive_write", 00:05:21.626 "zoned": false, 00:05:21.626 "supported_io_types": { 00:05:21.626 "read": true, 00:05:21.626 "write": true, 00:05:21.626 "unmap": true, 
00:05:21.626 "flush": true, 00:05:21.626 "reset": true, 00:05:21.626 "nvme_admin": false, 00:05:21.626 "nvme_io": false, 00:05:21.626 "nvme_io_md": false, 00:05:21.626 "write_zeroes": true, 00:05:21.626 "zcopy": true, 00:05:21.626 "get_zone_info": false, 00:05:21.626 "zone_management": false, 00:05:21.626 "zone_append": false, 00:05:21.626 "compare": false, 00:05:21.626 "compare_and_write": false, 00:05:21.626 "abort": true, 00:05:21.626 "seek_hole": false, 00:05:21.626 "seek_data": false, 00:05:21.626 "copy": true, 00:05:21.626 "nvme_iov_md": false 00:05:21.626 }, 00:05:21.626 "memory_domains": [ 00:05:21.626 { 00:05:21.626 "dma_device_id": "system", 00:05:21.626 "dma_device_type": 1 00:05:21.626 }, 00:05:21.626 { 00:05:21.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.626 "dma_device_type": 2 00:05:21.626 } 00:05:21.626 ], 00:05:21.626 "driver_specific": {} 00:05:21.626 }, 00:05:21.626 { 00:05:21.626 "name": "Passthru0", 00:05:21.626 "aliases": [ 00:05:21.626 "02c6e74c-ef32-5fdd-a9b2-45ca5c018d69" 00:05:21.626 ], 00:05:21.626 "product_name": "passthru", 00:05:21.626 "block_size": 512, 00:05:21.626 "num_blocks": 16384, 00:05:21.627 "uuid": "02c6e74c-ef32-5fdd-a9b2-45ca5c018d69", 00:05:21.627 "assigned_rate_limits": { 00:05:21.627 "rw_ios_per_sec": 0, 00:05:21.627 "rw_mbytes_per_sec": 0, 00:05:21.627 "r_mbytes_per_sec": 0, 00:05:21.627 "w_mbytes_per_sec": 0 00:05:21.627 }, 00:05:21.627 "claimed": false, 00:05:21.627 "zoned": false, 00:05:21.627 "supported_io_types": { 00:05:21.627 "read": true, 00:05:21.627 "write": true, 00:05:21.627 "unmap": true, 00:05:21.627 "flush": true, 00:05:21.627 "reset": true, 00:05:21.627 "nvme_admin": false, 00:05:21.627 "nvme_io": false, 00:05:21.627 "nvme_io_md": false, 00:05:21.627 "write_zeroes": true, 00:05:21.627 "zcopy": true, 00:05:21.627 "get_zone_info": false, 00:05:21.627 "zone_management": false, 00:05:21.627 "zone_append": false, 00:05:21.627 "compare": false, 00:05:21.627 "compare_and_write": false, 00:05:21.627 "abort": true, 00:05:21.627 "seek_hole": false, 00:05:21.627 "seek_data": false, 00:05:21.627 "copy": true, 00:05:21.627 "nvme_iov_md": false 00:05:21.627 }, 00:05:21.627 "memory_domains": [ 00:05:21.627 { 00:05:21.627 "dma_device_id": "system", 00:05:21.627 "dma_device_type": 1 00:05:21.627 }, 00:05:21.627 { 00:05:21.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.627 "dma_device_type": 2 00:05:21.627 } 00:05:21.627 ], 00:05:21.627 "driver_specific": { 00:05:21.627 "passthru": { 00:05:21.627 "name": "Passthru0", 00:05:21.627 "base_bdev_name": "Malloc0" 00:05:21.627 } 00:05:21.627 } 00:05:21.627 } 00:05:21.627 ]' 00:05:21.627 07:01:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.627 07:01:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.627 07:01:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.627 07:01:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.627 07:01:26 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.627 07:01:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.627 07:01:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.627 07:01:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.627 00:05:21.627 real 0m0.242s 00:05:21.627 user 0m0.156s 00:05:21.627 sys 0m0.031s 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.627 07:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.627 ************************************ 00:05:21.627 END TEST rpc_integrity 00:05:21.627 ************************************ 00:05:21.627 07:01:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:21.627 07:01:26 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.627 07:01:26 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.627 07:01:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.627 ************************************ 00:05:21.627 START TEST rpc_plugins 00:05:21.627 ************************************ 00:05:21.627 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:21.627 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:21.627 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.627 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.627 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.627 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:21.627 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:21.627 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.627 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.886 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:21.886 { 00:05:21.886 "name": "Malloc1", 00:05:21.886 "aliases": [ 00:05:21.886 "71b36927-3df7-47fd-be1d-b19925b5375e" 00:05:21.886 ], 00:05:21.886 "product_name": "Malloc disk", 00:05:21.886 "block_size": 4096, 00:05:21.886 "num_blocks": 256, 00:05:21.886 "uuid": "71b36927-3df7-47fd-be1d-b19925b5375e", 00:05:21.886 "assigned_rate_limits": { 00:05:21.886 "rw_ios_per_sec": 0, 00:05:21.886 "rw_mbytes_per_sec": 0, 00:05:21.886 "r_mbytes_per_sec": 0, 00:05:21.886 "w_mbytes_per_sec": 0 00:05:21.886 }, 00:05:21.886 "claimed": false, 00:05:21.886 "zoned": false, 00:05:21.886 "supported_io_types": { 00:05:21.886 "read": true, 00:05:21.886 "write": true, 00:05:21.886 "unmap": true, 00:05:21.886 "flush": true, 00:05:21.886 "reset": true, 00:05:21.886 "nvme_admin": false, 00:05:21.886 "nvme_io": false, 00:05:21.886 "nvme_io_md": false, 00:05:21.886 "write_zeroes": true, 00:05:21.886 "zcopy": true, 00:05:21.886 "get_zone_info": false, 00:05:21.886 "zone_management": false, 00:05:21.886 "zone_append": false, 00:05:21.886 "compare": false, 00:05:21.886 "compare_and_write": false, 00:05:21.886 "abort": true, 00:05:21.886 "seek_hole": false, 00:05:21.886 "seek_data": false, 00:05:21.886 "copy": true, 00:05:21.886 
"nvme_iov_md": false 00:05:21.886 }, 00:05:21.886 "memory_domains": [ 00:05:21.886 { 00:05:21.886 "dma_device_id": "system", 00:05:21.886 "dma_device_type": 1 00:05:21.886 }, 00:05:21.886 { 00:05:21.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.886 "dma_device_type": 2 00:05:21.886 } 00:05:21.886 ], 00:05:21.886 "driver_specific": {} 00:05:21.886 } 00:05:21.886 ]' 00:05:21.886 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:21.886 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:21.886 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.886 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.886 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:21.886 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:21.886 07:01:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:21.886 00:05:21.886 real 0m0.118s 00:05:21.886 user 0m0.067s 00:05:21.886 sys 0m0.019s 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.886 07:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.886 ************************************ 00:05:21.886 END TEST rpc_plugins 00:05:21.886 ************************************ 00:05:21.886 07:01:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:21.886 07:01:26 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.886 07:01:26 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.886 07:01:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.886 ************************************ 00:05:21.886 START TEST rpc_trace_cmd_test 00:05:21.886 ************************************ 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:21.886 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3760972", 00:05:21.886 "tpoint_group_mask": "0x8", 00:05:21.886 "iscsi_conn": { 00:05:21.886 "mask": "0x2", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "scsi": { 00:05:21.886 "mask": "0x4", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "bdev": { 00:05:21.886 "mask": "0x8", 00:05:21.886 "tpoint_mask": "0xffffffffffffffff" 00:05:21.886 }, 00:05:21.886 "nvmf_rdma": { 00:05:21.886 "mask": "0x10", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "nvmf_tcp": { 00:05:21.886 "mask": "0x20", 
00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "ftl": { 00:05:21.886 "mask": "0x40", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "blobfs": { 00:05:21.886 "mask": "0x80", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "dsa": { 00:05:21.886 "mask": "0x200", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "thread": { 00:05:21.886 "mask": "0x400", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "nvme_pcie": { 00:05:21.886 "mask": "0x800", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "iaa": { 00:05:21.886 "mask": "0x1000", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "nvme_tcp": { 00:05:21.886 "mask": "0x2000", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "bdev_nvme": { 00:05:21.886 "mask": "0x4000", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "sock": { 00:05:21.886 "mask": "0x8000", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "blob": { 00:05:21.886 "mask": "0x10000", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "bdev_raid": { 00:05:21.886 "mask": "0x20000", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 }, 00:05:21.886 "scheduler": { 00:05:21.886 "mask": "0x40000", 00:05:21.886 "tpoint_mask": "0x0" 00:05:21.886 } 00:05:21.886 }' 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:21.886 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:22.145 00:05:22.145 real 0m0.230s 00:05:22.145 user 0m0.186s 00:05:22.145 sys 0m0.036s 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.145 07:01:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.145 ************************************ 00:05:22.145 END TEST rpc_trace_cmd_test 00:05:22.145 ************************************ 00:05:22.145 07:01:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:22.145 07:01:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:22.145 07:01:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:22.145 07:01:26 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.145 07:01:26 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.145 07:01:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.145 ************************************ 00:05:22.145 START TEST rpc_daemon_integrity 00:05:22.145 ************************************ 00:05:22.145 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:22.145 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.145 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.145 07:01:26 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.145 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.145 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.145 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.405 { 00:05:22.405 "name": "Malloc2", 00:05:22.405 "aliases": [ 00:05:22.405 "abf52bc8-697a-4825-81d8-19f85d541a34" 00:05:22.405 ], 00:05:22.405 "product_name": "Malloc disk", 00:05:22.405 "block_size": 512, 00:05:22.405 "num_blocks": 16384, 00:05:22.405 "uuid": "abf52bc8-697a-4825-81d8-19f85d541a34", 00:05:22.405 "assigned_rate_limits": { 00:05:22.405 "rw_ios_per_sec": 0, 00:05:22.405 "rw_mbytes_per_sec": 0, 00:05:22.405 "r_mbytes_per_sec": 0, 00:05:22.405 "w_mbytes_per_sec": 0 00:05:22.405 }, 00:05:22.405 "claimed": false, 00:05:22.405 "zoned": false, 00:05:22.405 "supported_io_types": { 00:05:22.405 "read": true, 00:05:22.405 "write": true, 00:05:22.405 "unmap": true, 00:05:22.405 "flush": true, 00:05:22.405 "reset": true, 00:05:22.405 "nvme_admin": false, 00:05:22.405 "nvme_io": false, 00:05:22.405 "nvme_io_md": false, 00:05:22.405 "write_zeroes": true, 00:05:22.405 "zcopy": true, 00:05:22.405 "get_zone_info": false, 00:05:22.405 "zone_management": false, 00:05:22.405 "zone_append": false, 00:05:22.405 "compare": false, 00:05:22.405 "compare_and_write": false, 00:05:22.405 "abort": true, 00:05:22.405 "seek_hole": false, 00:05:22.405 "seek_data": false, 00:05:22.405 "copy": true, 00:05:22.405 "nvme_iov_md": false 00:05:22.405 }, 00:05:22.405 "memory_domains": [ 00:05:22.405 { 00:05:22.405 "dma_device_id": "system", 00:05:22.405 "dma_device_type": 1 00:05:22.405 }, 00:05:22.405 { 00:05:22.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.405 "dma_device_type": 2 00:05:22.405 } 00:05:22.405 ], 00:05:22.405 "driver_specific": {} 00:05:22.405 } 00:05:22.405 ]' 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.405 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.405 [2024-11-20 07:01:26.804947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:22.405 
[2024-11-20 07:01:26.804986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.405 [2024-11-20 07:01:26.805002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x62ebee0 00:05:22.406 [2024-11-20 07:01:26.805011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.406 [2024-11-20 07:01:26.805775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.406 [2024-11-20 07:01:26.805797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.406 Passthru0 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.406 { 00:05:22.406 "name": "Malloc2", 00:05:22.406 "aliases": [ 00:05:22.406 "abf52bc8-697a-4825-81d8-19f85d541a34" 00:05:22.406 ], 00:05:22.406 "product_name": "Malloc disk", 00:05:22.406 "block_size": 512, 00:05:22.406 "num_blocks": 16384, 00:05:22.406 "uuid": "abf52bc8-697a-4825-81d8-19f85d541a34", 00:05:22.406 "assigned_rate_limits": { 00:05:22.406 "rw_ios_per_sec": 0, 00:05:22.406 "rw_mbytes_per_sec": 0, 00:05:22.406 "r_mbytes_per_sec": 0, 00:05:22.406 "w_mbytes_per_sec": 0 00:05:22.406 }, 00:05:22.406 "claimed": true, 00:05:22.406 "claim_type": "exclusive_write", 00:05:22.406 "zoned": false, 00:05:22.406 "supported_io_types": { 00:05:22.406 "read": true, 00:05:22.406 "write": true, 00:05:22.406 "unmap": true, 00:05:22.406 "flush": true, 00:05:22.406 "reset": true, 00:05:22.406 "nvme_admin": false, 00:05:22.406 "nvme_io": false, 00:05:22.406 "nvme_io_md": false, 00:05:22.406 "write_zeroes": true, 00:05:22.406 "zcopy": true, 00:05:22.406 "get_zone_info": false, 00:05:22.406 "zone_management": false, 00:05:22.406 "zone_append": false, 00:05:22.406 "compare": false, 00:05:22.406 "compare_and_write": false, 00:05:22.406 "abort": true, 00:05:22.406 "seek_hole": false, 00:05:22.406 "seek_data": false, 00:05:22.406 "copy": true, 00:05:22.406 "nvme_iov_md": false 00:05:22.406 }, 00:05:22.406 "memory_domains": [ 00:05:22.406 { 00:05:22.406 "dma_device_id": "system", 00:05:22.406 "dma_device_type": 1 00:05:22.406 }, 00:05:22.406 { 00:05:22.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.406 "dma_device_type": 2 00:05:22.406 } 00:05:22.406 ], 00:05:22.406 "driver_specific": {} 00:05:22.406 }, 00:05:22.406 { 00:05:22.406 "name": "Passthru0", 00:05:22.406 "aliases": [ 00:05:22.406 "8371bae6-e6cd-568c-adee-8507a4fb68d5" 00:05:22.406 ], 00:05:22.406 "product_name": "passthru", 00:05:22.406 "block_size": 512, 00:05:22.406 "num_blocks": 16384, 00:05:22.406 "uuid": "8371bae6-e6cd-568c-adee-8507a4fb68d5", 00:05:22.406 "assigned_rate_limits": { 00:05:22.406 "rw_ios_per_sec": 0, 00:05:22.406 "rw_mbytes_per_sec": 0, 00:05:22.406 "r_mbytes_per_sec": 0, 00:05:22.406 "w_mbytes_per_sec": 0 00:05:22.406 }, 00:05:22.406 "claimed": false, 00:05:22.406 "zoned": false, 00:05:22.406 "supported_io_types": { 00:05:22.406 "read": true, 00:05:22.406 "write": true, 00:05:22.406 "unmap": true, 00:05:22.406 "flush": true, 00:05:22.406 "reset": true, 
00:05:22.406 "nvme_admin": false, 00:05:22.406 "nvme_io": false, 00:05:22.406 "nvme_io_md": false, 00:05:22.406 "write_zeroes": true, 00:05:22.406 "zcopy": true, 00:05:22.406 "get_zone_info": false, 00:05:22.406 "zone_management": false, 00:05:22.406 "zone_append": false, 00:05:22.406 "compare": false, 00:05:22.406 "compare_and_write": false, 00:05:22.406 "abort": true, 00:05:22.406 "seek_hole": false, 00:05:22.406 "seek_data": false, 00:05:22.406 "copy": true, 00:05:22.406 "nvme_iov_md": false 00:05:22.406 }, 00:05:22.406 "memory_domains": [ 00:05:22.406 { 00:05:22.406 "dma_device_id": "system", 00:05:22.406 "dma_device_type": 1 00:05:22.406 }, 00:05:22.406 { 00:05:22.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.406 "dma_device_type": 2 00:05:22.406 } 00:05:22.406 ], 00:05:22.406 "driver_specific": { 00:05:22.406 "passthru": { 00:05:22.406 "name": "Passthru0", 00:05:22.406 "base_bdev_name": "Malloc2" 00:05:22.406 } 00:05:22.406 } 00:05:22.406 } 00:05:22.406 ]' 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.406 00:05:22.406 real 0m0.259s 00:05:22.406 user 0m0.159s 00:05:22.406 sys 0m0.037s 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.406 07:01:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.406 ************************************ 00:05:22.406 END TEST rpc_daemon_integrity 00:05:22.406 ************************************ 00:05:22.665 07:01:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.665 07:01:26 rpc -- rpc/rpc.sh@84 -- # killprocess 3760972 00:05:22.665 07:01:26 rpc -- common/autotest_common.sh@952 -- # '[' -z 3760972 ']' 00:05:22.665 07:01:26 rpc -- common/autotest_common.sh@956 -- # kill -0 3760972 00:05:22.665 07:01:26 rpc -- common/autotest_common.sh@957 -- # uname 00:05:22.665 07:01:26 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:22.665 07:01:26 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3760972 
00:05:22.665 07:01:27 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:22.665 07:01:27 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:22.665 07:01:27 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3760972' 00:05:22.665 killing process with pid 3760972 00:05:22.665 07:01:27 rpc -- common/autotest_common.sh@971 -- # kill 3760972 00:05:22.665 07:01:27 rpc -- common/autotest_common.sh@976 -- # wait 3760972 00:05:22.923 00:05:22.923 real 0m2.063s 00:05:22.923 user 0m2.608s 00:05:22.923 sys 0m0.751s 00:05:22.924 07:01:27 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.924 07:01:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.924 ************************************ 00:05:22.924 END TEST rpc 00:05:22.924 ************************************ 00:05:22.924 07:01:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:22.924 07:01:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.924 07:01:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.924 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:22.924 ************************************ 00:05:22.924 START TEST skip_rpc 00:05:22.924 ************************************ 00:05:22.924 07:01:27 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.182 * Looking for test storage... 00:05:23.182 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.182 07:01:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.182 --rc genhtml_branch_coverage=1 00:05:23.182 --rc genhtml_function_coverage=1 00:05:23.182 --rc genhtml_legend=1 00:05:23.182 --rc geninfo_all_blocks=1 00:05:23.182 --rc geninfo_unexecuted_blocks=1 00:05:23.182 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:23.182 ' 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.182 --rc genhtml_branch_coverage=1 00:05:23.182 --rc genhtml_function_coverage=1 00:05:23.182 --rc genhtml_legend=1 00:05:23.182 --rc geninfo_all_blocks=1 00:05:23.182 --rc geninfo_unexecuted_blocks=1 00:05:23.182 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:23.182 ' 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.182 --rc genhtml_branch_coverage=1 00:05:23.182 --rc genhtml_function_coverage=1 00:05:23.182 --rc genhtml_legend=1 00:05:23.182 --rc geninfo_all_blocks=1 00:05:23.182 --rc geninfo_unexecuted_blocks=1 00:05:23.182 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:23.182 ' 00:05:23.182 07:01:27 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.182 --rc genhtml_branch_coverage=1 00:05:23.182 --rc genhtml_function_coverage=1 00:05:23.182 --rc genhtml_legend=1 00:05:23.182 --rc geninfo_all_blocks=1 00:05:23.182 --rc geninfo_unexecuted_blocks=1 00:05:23.183 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:23.183 ' 00:05:23.183 07:01:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:23.183 07:01:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:23.183 07:01:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:23.183 07:01:27 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.183 07:01:27 
skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.183 07:01:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.183 ************************************ 00:05:23.183 START TEST skip_rpc 00:05:23.183 ************************************ 00:05:23.183 07:01:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:23.183 07:01:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3761426 00:05:23.183 07:01:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.183 07:01:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:23.183 07:01:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:23.183 [2024-11-20 07:01:27.687953] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:23.183 [2024-11-20 07:01:27.688034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761426 ] 00:05:23.441 [2024-11-20 07:01:27.758261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.441 [2024-11-20 07:01:27.799538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3761426 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3761426 ']' 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3761426 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3761426 
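
The skip_rpc sequence above (now winding down via killprocess) exercises the --no-rpc-server flag: spdk_tgt comes up without a listener on /var/tmp/spdk.sock, so the probe RPC must fail, and the harness's NOT wrapper turns that expected failure (es=1) into a pass. Stripped of the harness plumbing, the check amounts to something like the following sketch (flags, the 5-second wait, and the spdk_get_version probe as in the log; paths relative to the spdk tree):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # no RPC listener is created
    pid=$!
    sleep 5                                          # mirror the harness wait
    if ./scripts/rpc.py spdk_get_version 2>/dev/null; then
        echo "FAIL: spdk_get_version answered despite --no-rpc-server"
    else
        echo "OK: RPC correctly unavailable"
    fi
    kill -9 "$pid"
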
00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3761426' 00:05:28.829 killing process with pid 3761426 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3761426 00:05:28.829 07:01:32 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3761426 00:05:28.829 00:05:28.829 real 0m5.374s 00:05:28.829 user 0m5.138s 00:05:28.829 sys 0m0.287s 00:05:28.829 07:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.829 07:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.830 ************************************ 00:05:28.830 END TEST skip_rpc 00:05:28.830 ************************************ 00:05:28.830 07:01:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:28.830 07:01:33 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.830 07:01:33 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.830 07:01:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.830 ************************************ 00:05:28.830 START TEST skip_rpc_with_json 00:05:28.830 ************************************ 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3762511 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3762511 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3762511 ']' 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.830 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.830 [2024-11-20 07:01:33.146339] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:05:28.830 [2024-11-20 07:01:33.146398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762511 ] 00:05:28.830 [2024-11-20 07:01:33.215302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.830 [2024-11-20 07:01:33.253449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.089 [2024-11-20 07:01:33.466584] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:29.089 request: 00:05:29.089 { 00:05:29.089 "trtype": "tcp", 00:05:29.089 "method": "nvmf_get_transports", 00:05:29.089 "req_id": 1 00:05:29.089 } 00:05:29.089 Got JSON-RPC error response 00:05:29.089 response: 00:05:29.089 { 00:05:29.089 "code": -19, 00:05:29.089 "message": "No such device" 00:05:29.089 } 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.089 [2024-11-20 07:01:33.478682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.089 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.348 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.348 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:29.348 { 00:05:29.348 "subsystems": [ 00:05:29.348 { 00:05:29.349 "subsystem": "scheduler", 00:05:29.349 "config": [ 00:05:29.349 { 00:05:29.349 "method": "framework_set_scheduler", 00:05:29.349 "params": { 00:05:29.349 "name": "static" 00:05:29.349 } 00:05:29.349 } 00:05:29.349 ] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "vmd", 00:05:29.349 "config": [] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "sock", 00:05:29.349 "config": [ 00:05:29.349 { 00:05:29.349 "method": "sock_set_default_impl", 00:05:29.349 "params": { 00:05:29.349 "impl_name": "posix" 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "sock_impl_set_options", 00:05:29.349 "params": { 00:05:29.349 "impl_name": "ssl", 00:05:29.349 "recv_buf_size": 4096, 00:05:29.349 "send_buf_size": 4096, 00:05:29.349 "enable_recv_pipe": true, 00:05:29.349 "enable_quickack": false, 00:05:29.349 
"enable_placement_id": 0, 00:05:29.349 "enable_zerocopy_send_server": true, 00:05:29.349 "enable_zerocopy_send_client": false, 00:05:29.349 "zerocopy_threshold": 0, 00:05:29.349 "tls_version": 0, 00:05:29.349 "enable_ktls": false 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "sock_impl_set_options", 00:05:29.349 "params": { 00:05:29.349 "impl_name": "posix", 00:05:29.349 "recv_buf_size": 2097152, 00:05:29.349 "send_buf_size": 2097152, 00:05:29.349 "enable_recv_pipe": true, 00:05:29.349 "enable_quickack": false, 00:05:29.349 "enable_placement_id": 0, 00:05:29.349 "enable_zerocopy_send_server": true, 00:05:29.349 "enable_zerocopy_send_client": false, 00:05:29.349 "zerocopy_threshold": 0, 00:05:29.349 "tls_version": 0, 00:05:29.349 "enable_ktls": false 00:05:29.349 } 00:05:29.349 } 00:05:29.349 ] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "iobuf", 00:05:29.349 "config": [ 00:05:29.349 { 00:05:29.349 "method": "iobuf_set_options", 00:05:29.349 "params": { 00:05:29.349 "small_pool_count": 8192, 00:05:29.349 "large_pool_count": 1024, 00:05:29.349 "small_bufsize": 8192, 00:05:29.349 "large_bufsize": 135168, 00:05:29.349 "enable_numa": false 00:05:29.349 } 00:05:29.349 } 00:05:29.349 ] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "keyring", 00:05:29.349 "config": [] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "vfio_user_target", 00:05:29.349 "config": null 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "fsdev", 00:05:29.349 "config": [ 00:05:29.349 { 00:05:29.349 "method": "fsdev_set_opts", 00:05:29.349 "params": { 00:05:29.349 "fsdev_io_pool_size": 65535, 00:05:29.349 "fsdev_io_cache_size": 256 00:05:29.349 } 00:05:29.349 } 00:05:29.349 ] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "accel", 00:05:29.349 "config": [ 00:05:29.349 { 00:05:29.349 "method": "accel_set_options", 00:05:29.349 "params": { 00:05:29.349 "small_cache_size": 128, 00:05:29.349 "large_cache_size": 16, 00:05:29.349 "task_count": 2048, 00:05:29.349 "sequence_count": 2048, 00:05:29.349 "buf_count": 2048 00:05:29.349 } 00:05:29.349 } 00:05:29.349 ] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "bdev", 00:05:29.349 "config": [ 00:05:29.349 { 00:05:29.349 "method": "bdev_set_options", 00:05:29.349 "params": { 00:05:29.349 "bdev_io_pool_size": 65535, 00:05:29.349 "bdev_io_cache_size": 256, 00:05:29.349 "bdev_auto_examine": true, 00:05:29.349 "iobuf_small_cache_size": 128, 00:05:29.349 "iobuf_large_cache_size": 16 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "bdev_raid_set_options", 00:05:29.349 "params": { 00:05:29.349 "process_window_size_kb": 1024, 00:05:29.349 "process_max_bandwidth_mb_sec": 0 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "bdev_nvme_set_options", 00:05:29.349 "params": { 00:05:29.349 "action_on_timeout": "none", 00:05:29.349 "timeout_us": 0, 00:05:29.349 "timeout_admin_us": 0, 00:05:29.349 "keep_alive_timeout_ms": 10000, 00:05:29.349 "arbitration_burst": 0, 00:05:29.349 "low_priority_weight": 0, 00:05:29.349 "medium_priority_weight": 0, 00:05:29.349 "high_priority_weight": 0, 00:05:29.349 "nvme_adminq_poll_period_us": 10000, 00:05:29.349 "nvme_ioq_poll_period_us": 0, 00:05:29.349 "io_queue_requests": 0, 00:05:29.349 "delay_cmd_submit": true, 00:05:29.349 "transport_retry_count": 4, 00:05:29.349 "bdev_retry_count": 3, 00:05:29.349 "transport_ack_timeout": 0, 00:05:29.349 "ctrlr_loss_timeout_sec": 0, 00:05:29.349 "reconnect_delay_sec": 0, 00:05:29.349 
"fast_io_fail_timeout_sec": 0, 00:05:29.349 "disable_auto_failback": false, 00:05:29.349 "generate_uuids": false, 00:05:29.349 "transport_tos": 0, 00:05:29.349 "nvme_error_stat": false, 00:05:29.349 "rdma_srq_size": 0, 00:05:29.349 "io_path_stat": false, 00:05:29.349 "allow_accel_sequence": false, 00:05:29.349 "rdma_max_cq_size": 0, 00:05:29.349 "rdma_cm_event_timeout_ms": 0, 00:05:29.349 "dhchap_digests": [ 00:05:29.349 "sha256", 00:05:29.349 "sha384", 00:05:29.349 "sha512" 00:05:29.349 ], 00:05:29.349 "dhchap_dhgroups": [ 00:05:29.349 "null", 00:05:29.349 "ffdhe2048", 00:05:29.349 "ffdhe3072", 00:05:29.349 "ffdhe4096", 00:05:29.349 "ffdhe6144", 00:05:29.349 "ffdhe8192" 00:05:29.349 ] 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "bdev_nvme_set_hotplug", 00:05:29.349 "params": { 00:05:29.349 "period_us": 100000, 00:05:29.349 "enable": false 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "bdev_iscsi_set_options", 00:05:29.349 "params": { 00:05:29.349 "timeout_sec": 30 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "bdev_wait_for_examine" 00:05:29.349 } 00:05:29.349 ] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "nvmf", 00:05:29.349 "config": [ 00:05:29.349 { 00:05:29.349 "method": "nvmf_set_config", 00:05:29.349 "params": { 00:05:29.349 "discovery_filter": "match_any", 00:05:29.349 "admin_cmd_passthru": { 00:05:29.349 "identify_ctrlr": false 00:05:29.349 }, 00:05:29.349 "dhchap_digests": [ 00:05:29.349 "sha256", 00:05:29.349 "sha384", 00:05:29.349 "sha512" 00:05:29.349 ], 00:05:29.349 "dhchap_dhgroups": [ 00:05:29.349 "null", 00:05:29.349 "ffdhe2048", 00:05:29.349 "ffdhe3072", 00:05:29.349 "ffdhe4096", 00:05:29.349 "ffdhe6144", 00:05:29.349 "ffdhe8192" 00:05:29.349 ] 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "nvmf_set_max_subsystems", 00:05:29.349 "params": { 00:05:29.349 "max_subsystems": 1024 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "nvmf_set_crdt", 00:05:29.349 "params": { 00:05:29.349 "crdt1": 0, 00:05:29.349 "crdt2": 0, 00:05:29.349 "crdt3": 0 00:05:29.349 } 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "method": "nvmf_create_transport", 00:05:29.349 "params": { 00:05:29.349 "trtype": "TCP", 00:05:29.349 "max_queue_depth": 128, 00:05:29.349 "max_io_qpairs_per_ctrlr": 127, 00:05:29.349 "in_capsule_data_size": 4096, 00:05:29.349 "max_io_size": 131072, 00:05:29.349 "io_unit_size": 131072, 00:05:29.349 "max_aq_depth": 128, 00:05:29.349 "num_shared_buffers": 511, 00:05:29.349 "buf_cache_size": 4294967295, 00:05:29.349 "dif_insert_or_strip": false, 00:05:29.349 "zcopy": false, 00:05:29.349 "c2h_success": true, 00:05:29.349 "sock_priority": 0, 00:05:29.349 "abort_timeout_sec": 1, 00:05:29.349 "ack_timeout": 0, 00:05:29.349 "data_wr_pool_size": 0 00:05:29.349 } 00:05:29.349 } 00:05:29.349 ] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "nbd", 00:05:29.349 "config": [] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "ublk", 00:05:29.349 "config": [] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "vhost_blk", 00:05:29.349 "config": [] 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "scsi", 00:05:29.349 "config": null 00:05:29.349 }, 00:05:29.349 { 00:05:29.349 "subsystem": "iscsi", 00:05:29.349 "config": [ 00:05:29.349 { 00:05:29.349 "method": "iscsi_set_options", 00:05:29.349 "params": { 00:05:29.349 "node_base": "iqn.2016-06.io.spdk", 00:05:29.349 "max_sessions": 128, 00:05:29.349 "max_connections_per_session": 2, 
00:05:29.349 "max_queue_depth": 64, 00:05:29.349 "default_time2wait": 2, 00:05:29.349 "default_time2retain": 20, 00:05:29.349 "first_burst_length": 8192, 00:05:29.349 "immediate_data": true, 00:05:29.349 "allow_duplicated_isid": false, 00:05:29.349 "error_recovery_level": 0, 00:05:29.349 "nop_timeout": 60, 00:05:29.349 "nop_in_interval": 30, 00:05:29.349 "disable_chap": false, 00:05:29.349 "require_chap": false, 00:05:29.349 "mutual_chap": false, 00:05:29.349 "chap_group": 0, 00:05:29.350 "max_large_datain_per_connection": 64, 00:05:29.350 "max_r2t_per_connection": 4, 00:05:29.350 "pdu_pool_size": 36864, 00:05:29.350 "immediate_data_pool_size": 16384, 00:05:29.350 "data_out_pool_size": 2048 00:05:29.350 } 00:05:29.350 } 00:05:29.350 ] 00:05:29.350 }, 00:05:29.350 { 00:05:29.350 "subsystem": "vhost_scsi", 00:05:29.350 "config": [] 00:05:29.350 } 00:05:29.350 ] 00:05:29.350 } 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3762511 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3762511 ']' 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3762511 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3762511 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3762511' 00:05:29.350 killing process with pid 3762511 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3762511 00:05:29.350 07:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3762511 00:05:29.609 07:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3762537 00:05:29.609 07:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:29.609 07:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3762537 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3762537 ']' 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3762537 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3762537 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.879 07:01:39 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3762537' 00:05:34.879 killing process with pid 3762537 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3762537 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3762537 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:34.879 00:05:34.879 real 0m6.272s 00:05:34.879 user 0m5.952s 00:05:34.879 sys 0m0.637s 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.879 07:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.879 ************************************ 00:05:34.879 END TEST skip_rpc_with_json 00:05:34.879 ************************************ 00:05:34.879 07:01:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:34.879 07:01:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.879 07:01:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.139 07:01:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.139 ************************************ 00:05:35.139 START TEST skip_rpc_with_delay 00:05:35.139 ************************************ 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.139 [2024-11-20 07:01:39.503815] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.139 00:05:35.139 real 0m0.046s 00:05:35.139 user 0m0.020s 00:05:35.139 sys 0m0.027s 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.139 07:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:35.139 ************************************ 00:05:35.139 END TEST skip_rpc_with_delay 00:05:35.139 ************************************ 00:05:35.139 07:01:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:35.139 07:01:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:35.139 07:01:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:35.139 07:01:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.139 07:01:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.139 07:01:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.139 ************************************ 00:05:35.139 START TEST exit_on_failed_rpc_init 00:05:35.139 ************************************ 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3763647 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3763647 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3763647 ']' 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.139 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.139 [2024-11-20 07:01:39.629617] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
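A minimal sketch of the scenario the exit_on_failed_rpc_init run starting here exercises: one spdk_tgt binds the default RPC socket, a second instance must then fail to initialize. Assumes spdk_tgt is on PATH and /var/tmp/spdk.sock is the default socket; this is not the verbatim test script:

    spdk_tgt -m 0x1 &                 # first target binds /var/tmp/spdk.sock
    pid=$!
    sleep 1                           # crude wait; the real harness polls the socket
    if spdk_tgt -m 0x2; then          # must fail: RPC socket already in use
        echo "unexpected: second target started"
    fi
    kill "$pid"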
00:05:35.139 [2024-11-20 07:01:39.629675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763647 ] 00:05:35.400 [2024-11-20 07:01:39.701609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.400 [2024-11-20 07:01:39.746845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.400 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:35.400 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:35.400 07:01:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.400 07:01:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.400 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:35.400 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.400 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.658 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.658 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.658 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.658 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.658 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.658 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.658 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:35.658 07:01:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.658 [2024-11-20 07:01:39.987486] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:35.658 [2024-11-20 07:01:39.987569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763657 ] 00:05:35.658 [2024-11-20 07:01:40.062317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.658 [2024-11-20 07:01:40.106048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.658 [2024-11-20 07:01:40.106134] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:35.658 [2024-11-20 07:01:40.106147] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:35.658 [2024-11-20 07:01:40.106155] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.658 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3763647 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3763647 ']' 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3763647 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:35.659 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3763647 00:05:35.917 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:35.917 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:35.917 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3763647' 00:05:35.917 killing process with pid 3763647 00:05:35.917 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3763647 00:05:35.917 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3763647 00:05:36.175 00:05:36.175 real 0m0.902s 00:05:36.175 user 0m0.928s 00:05:36.175 sys 0m0.417s 00:05:36.175 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.175 07:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.175 ************************************ 00:05:36.175 END TEST exit_on_failed_rpc_init 00:05:36.175 ************************************ 00:05:36.175 07:01:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:36.175 00:05:36.175 real 0m13.137s 00:05:36.175 user 0m12.260s 00:05:36.175 sys 0m1.726s 00:05:36.176 07:01:40 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.176 07:01:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.176 ************************************ 00:05:36.176 END TEST skip_rpc 00:05:36.176 ************************************ 00:05:36.176 07:01:40 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:36.176 07:01:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.176 07:01:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.176 07:01:40 
-- common/autotest_common.sh@10 -- # set +x 00:05:36.176 ************************************ 00:05:36.176 START TEST rpc_client 00:05:36.176 ************************************ 00:05:36.176 07:01:40 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:36.176 * Looking for test storage... 00:05:36.435 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.435 07:01:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.435 --rc genhtml_branch_coverage=1 00:05:36.435 --rc genhtml_function_coverage=1 00:05:36.435 --rc genhtml_legend=1 00:05:36.435 --rc geninfo_all_blocks=1 00:05:36.435 --rc geninfo_unexecuted_blocks=1 00:05:36.435 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.435 ' 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.435 --rc genhtml_branch_coverage=1 00:05:36.435 --rc genhtml_function_coverage=1 00:05:36.435 --rc genhtml_legend=1 00:05:36.435 --rc geninfo_all_blocks=1 00:05:36.435 --rc geninfo_unexecuted_blocks=1 00:05:36.435 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.435 ' 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.435 --rc genhtml_branch_coverage=1 00:05:36.435 --rc genhtml_function_coverage=1 00:05:36.435 --rc genhtml_legend=1 00:05:36.435 --rc geninfo_all_blocks=1 00:05:36.435 --rc geninfo_unexecuted_blocks=1 00:05:36.435 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.435 ' 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.435 --rc genhtml_branch_coverage=1 00:05:36.435 --rc genhtml_function_coverage=1 00:05:36.435 --rc genhtml_legend=1 00:05:36.435 --rc geninfo_all_blocks=1 00:05:36.435 --rc geninfo_unexecuted_blocks=1 00:05:36.435 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.435 ' 00:05:36.435 07:01:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:36.435 OK 00:05:36.435 07:01:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:36.435 00:05:36.435 real 0m0.210s 00:05:36.435 user 0m0.122s 00:05:36.435 sys 0m0.101s 00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 
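The scripts/common.sh trace above is a dotted-version comparison: `lt 1.15 2` asks whether the installed lcov predates 2.x so the right coverage flags get picked. A simplified, hedged re-sketch of that logic — field handling condensed, the function body here is illustrative rather than the script's exact code:

    lt() {                              # succeed when version $1 sorts before $2
        local IFS=.-: v a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            ((${a[v]:-0} < ${b[v]:-0})) && return 0
            ((${a[v]:-0} > ${b[v]:-0})) && return 1
        done
        return 1                        # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"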
00:05:36.435 07:01:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:36.435 ************************************ 00:05:36.435 END TEST rpc_client 00:05:36.435 ************************************ 00:05:36.435 07:01:40 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:36.435 07:01:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.435 07:01:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.435 07:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:36.435 ************************************ 00:05:36.435 START TEST json_config 00:05:36.435 ************************************ 00:05:36.435 07:01:40 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.695 07:01:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.695 07:01:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.695 07:01:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.695 07:01:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.695 07:01:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.695 07:01:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.695 07:01:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.695 07:01:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.695 07:01:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.695 07:01:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.695 07:01:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.695 07:01:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:36.695 07:01:41 json_config -- scripts/common.sh@345 -- # : 1 00:05:36.695 07:01:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.695 07:01:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.695 07:01:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:36.695 07:01:41 json_config -- scripts/common.sh@353 -- # local d=1 00:05:36.695 07:01:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.695 07:01:41 json_config -- scripts/common.sh@355 -- # echo 1 00:05:36.695 07:01:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.695 07:01:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:36.695 07:01:41 json_config -- scripts/common.sh@353 -- # local d=2 00:05:36.695 07:01:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.695 07:01:41 json_config -- scripts/common.sh@355 -- # echo 2 00:05:36.695 07:01:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.695 07:01:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.695 07:01:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.695 07:01:41 json_config -- scripts/common.sh@368 -- # return 0 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.695 --rc genhtml_branch_coverage=1 00:05:36.695 --rc genhtml_function_coverage=1 00:05:36.695 --rc genhtml_legend=1 00:05:36.695 --rc geninfo_all_blocks=1 00:05:36.695 --rc geninfo_unexecuted_blocks=1 00:05:36.695 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.695 ' 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.695 --rc genhtml_branch_coverage=1 00:05:36.695 --rc genhtml_function_coverage=1 00:05:36.695 --rc genhtml_legend=1 00:05:36.695 --rc geninfo_all_blocks=1 00:05:36.695 --rc geninfo_unexecuted_blocks=1 00:05:36.695 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.695 ' 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.695 --rc genhtml_branch_coverage=1 00:05:36.695 --rc genhtml_function_coverage=1 00:05:36.695 --rc genhtml_legend=1 00:05:36.695 --rc geninfo_all_blocks=1 00:05:36.695 --rc geninfo_unexecuted_blocks=1 00:05:36.695 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.695 ' 00:05:36.695 07:01:41 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.695 --rc genhtml_branch_coverage=1 00:05:36.695 --rc genhtml_function_coverage=1 00:05:36.695 --rc genhtml_legend=1 00:05:36.695 --rc geninfo_all_blocks=1 00:05:36.695 --rc geninfo_unexecuted_blocks=1 00:05:36.695 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.695 ' 00:05:36.695 07:01:41 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.695 07:01:41 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:36.695 07:01:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.695 07:01:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.695 07:01:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.695 07:01:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.695 07:01:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.696 07:01:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.696 07:01:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.696 07:01:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:36.696 07:01:41 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@51 -- # : 0 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.696 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.696 07:01:41 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.696 07:01:41 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:36.696 07:01:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:36.696 07:01:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:36.696 07:01:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:36.696 07:01:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:36.696 07:01:41 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:36.696 WARNING: No tests are enabled so not running JSON configuration tests 00:05:36.696 07:01:41 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:36.696 00:05:36.696 real 0m0.197s 00:05:36.696 user 0m0.110s 00:05:36.696 sys 0m0.094s 00:05:36.696 07:01:41 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.696 07:01:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.696 ************************************ 00:05:36.696 END TEST json_config 00:05:36.696 ************************************ 00:05:36.696 07:01:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:36.696 07:01:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.696 07:01:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.696 07:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:36.696 ************************************ 00:05:36.696 START TEST json_config_extra_key 00:05:36.696 ************************************ 00:05:36.696 07:01:41 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:36.956 07:01:41 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:36.957 07:01:41 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov 
--version 00:05:36.957 07:01:41 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.957 07:01:41 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:36.957 07:01:41 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.957 07:01:41 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.957 --rc genhtml_branch_coverage=1 00:05:36.957 --rc genhtml_function_coverage=1 00:05:36.957 --rc genhtml_legend=1 00:05:36.957 --rc geninfo_all_blocks=1 00:05:36.957 --rc geninfo_unexecuted_blocks=1 00:05:36.957 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.957 ' 00:05:36.957 07:01:41 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.957 --rc genhtml_branch_coverage=1 
00:05:36.957 --rc genhtml_function_coverage=1 00:05:36.957 --rc genhtml_legend=1 00:05:36.957 --rc geninfo_all_blocks=1 00:05:36.957 --rc geninfo_unexecuted_blocks=1 00:05:36.957 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.957 ' 00:05:36.957 07:01:41 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.957 --rc genhtml_branch_coverage=1 00:05:36.957 --rc genhtml_function_coverage=1 00:05:36.957 --rc genhtml_legend=1 00:05:36.957 --rc geninfo_all_blocks=1 00:05:36.957 --rc geninfo_unexecuted_blocks=1 00:05:36.957 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.957 ' 00:05:36.957 07:01:41 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.957 --rc genhtml_branch_coverage=1 00:05:36.957 --rc genhtml_function_coverage=1 00:05:36.957 --rc genhtml_legend=1 00:05:36.957 --rc geninfo_all_blocks=1 00:05:36.957 --rc geninfo_unexecuted_blocks=1 00:05:36.957 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.957 ' 00:05:36.957 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.957 07:01:41 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.957 07:01:41 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.957 07:01:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.957 07:01:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.957 07:01:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.957 07:01:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:36.957 07:01:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.957 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.957 07:01:41 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.958 07:01:41 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:36.958 INFO: launching applications... 00:05:36.958 07:01:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3764092 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.958 Waiting for target to run... 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3764092 /var/tmp/spdk_tgt.sock 00:05:36.958 07:01:41 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3764092 ']' 00:05:36.958 07:01:41 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:36.958 07:01:41 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.958 07:01:41 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.958 07:01:41 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
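The launch just traced follows the json_config/common.sh pattern: start the target with a JSON config on a private RPC socket, then poll that socket until RPCs answer. Reduced to a hedged sketch (flags and paths taken from this log; assumes it runs from the SPDK repo root):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    pid=$!
    # waitforlisten in the real harness also bounds the number of retries
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done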
00:05:36.958 07:01:41 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.958 07:01:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.958 [2024-11-20 07:01:41.430605] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:36.958 [2024-11-20 07:01:41.430671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764092 ] 00:05:37.524 [2024-11-20 07:01:41.862016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.524 [2024-11-20 07:01:41.917622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.783 07:01:42 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:37.783 07:01:42 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:37.783 00:05:37.783 07:01:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:37.783 INFO: shutting down applications... 00:05:37.783 07:01:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3764092 ]] 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3764092 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3764092 00:05:37.783 07:01:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.351 07:01:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.351 07:01:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.351 07:01:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3764092 00:05:38.351 07:01:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:38.351 07:01:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:38.351 07:01:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:38.351 07:01:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:38.351 SPDK target shutdown done 00:05:38.351 07:01:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:38.351 Success 00:05:38.351 00:05:38.351 real 0m1.560s 00:05:38.351 user 0m1.165s 00:05:38.351 sys 0m0.552s 00:05:38.351 07:01:42 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.351 07:01:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.351 ************************************ 00:05:38.351 END TEST json_config_extra_key 00:05:38.351 ************************************ 00:05:38.351 07:01:42 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
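The alias_rpc run whose output follows starts a bare spdk_tgt and replays a configuration through rpc.py load_config with -i, so deprecated (aliased) method names in the JSON still resolve; load_config reads the JSON from stdin. A hedged sketch — the empty subsystems array is an illustrative no-op, not the test's real input:

    build/bin/spdk_tgt &                  # default mask, default /var/tmp/spdk.sock
    pid=$!
    sleep 1                               # the real harness waits on the RPC socket
    echo '{"subsystems": []}' | scripts/rpc.py load_config -i
    kill "$pid"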
00:05:38.351 07:01:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.351 07:01:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.351 07:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:38.351 ************************************ 00:05:38.351 START TEST alias_rpc 00:05:38.351 ************************************ 00:05:38.351 07:01:42 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.609 * Looking for test storage... 00:05:38.609 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:38.609 07:01:42 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:38.610 07:01:42 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:38.610 07:01:42 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.610 07:01:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:38.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.610 --rc genhtml_branch_coverage=1 00:05:38.610 --rc genhtml_function_coverage=1 00:05:38.610 --rc genhtml_legend=1 00:05:38.610 --rc geninfo_all_blocks=1 00:05:38.610 --rc geninfo_unexecuted_blocks=1 00:05:38.610 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.610 ' 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:38.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.610 --rc genhtml_branch_coverage=1 00:05:38.610 --rc genhtml_function_coverage=1 00:05:38.610 --rc genhtml_legend=1 00:05:38.610 --rc geninfo_all_blocks=1 00:05:38.610 --rc geninfo_unexecuted_blocks=1 00:05:38.610 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.610 ' 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:38.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.610 --rc genhtml_branch_coverage=1 00:05:38.610 --rc genhtml_function_coverage=1 00:05:38.610 --rc genhtml_legend=1 00:05:38.610 --rc geninfo_all_blocks=1 00:05:38.610 --rc geninfo_unexecuted_blocks=1 00:05:38.610 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.610 ' 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:38.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.610 --rc genhtml_branch_coverage=1 00:05:38.610 --rc genhtml_function_coverage=1 00:05:38.610 --rc genhtml_legend=1 00:05:38.610 --rc geninfo_all_blocks=1 00:05:38.610 --rc geninfo_unexecuted_blocks=1 00:05:38.610 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.610 ' 00:05:38.610 07:01:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.610 07:01:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3764416 00:05:38.610 07:01:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3764416 00:05:38.610 07:01:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.610 07:01:43 alias_rpc -- 
common/autotest_common.sh@833 -- # '[' -z 3764416 ']' 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.610 07:01:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.610 [2024-11-20 07:01:43.057206] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:38.610 [2024-11-20 07:01:43.057282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764416 ] 00:05:38.610 [2024-11-20 07:01:43.129574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.869 [2024-11-20 07:01:43.172816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.869 07:01:43 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.869 07:01:43 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:38.869 07:01:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:39.128 07:01:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3764416 00:05:39.128 07:01:43 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3764416 ']' 00:05:39.128 07:01:43 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3764416 00:05:39.128 07:01:43 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:39.129 07:01:43 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:39.129 07:01:43 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3764416 00:05:39.129 07:01:43 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:39.129 07:01:43 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:39.129 07:01:43 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3764416' 00:05:39.129 killing process with pid 3764416 00:05:39.129 07:01:43 alias_rpc -- common/autotest_common.sh@971 -- # kill 3764416 00:05:39.129 07:01:43 alias_rpc -- common/autotest_common.sh@976 -- # wait 3764416 00:05:39.388 00:05:39.388 real 0m1.078s 00:05:39.388 user 0m1.060s 00:05:39.388 sys 0m0.441s 00:05:39.388 07:01:43 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.388 07:01:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.388 ************************************ 00:05:39.388 END TEST alias_rpc 00:05:39.388 ************************************ 00:05:39.646 07:01:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:39.646 07:01:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:39.646 07:01:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.646 07:01:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.646 07:01:43 -- common/autotest_common.sh@10 -- # set +x 00:05:39.646 ************************************ 00:05:39.646 START TEST 
spdkcli_tcp 00:05:39.646 ************************************ 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:39.646 * Looking for test storage... 00:05:39.646 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.646 07:01:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:39.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.646 --rc genhtml_branch_coverage=1 00:05:39.646 --rc genhtml_function_coverage=1 00:05:39.646 --rc genhtml_legend=1 00:05:39.646 --rc geninfo_all_blocks=1 00:05:39.646 --rc geninfo_unexecuted_blocks=1 00:05:39.646 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.646 ' 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:39.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.646 --rc genhtml_branch_coverage=1 00:05:39.646 --rc genhtml_function_coverage=1 00:05:39.646 --rc genhtml_legend=1 00:05:39.646 --rc geninfo_all_blocks=1 00:05:39.646 --rc geninfo_unexecuted_blocks=1 00:05:39.646 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.646 ' 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:39.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.646 --rc genhtml_branch_coverage=1 00:05:39.646 --rc genhtml_function_coverage=1 00:05:39.646 --rc genhtml_legend=1 00:05:39.646 --rc geninfo_all_blocks=1 00:05:39.646 --rc geninfo_unexecuted_blocks=1 00:05:39.646 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.646 ' 00:05:39.646 07:01:44 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:39.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.646 --rc genhtml_branch_coverage=1 00:05:39.646 --rc genhtml_function_coverage=1 00:05:39.646 --rc genhtml_legend=1 00:05:39.646 --rc geninfo_all_blocks=1 00:05:39.646 --rc geninfo_unexecuted_blocks=1 00:05:39.646 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.646 ' 00:05:39.905 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:39.905 07:01:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:39.905 07:01:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:39.906 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.906 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.906 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.906 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.906 07:01:44 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.906 07:01:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.906 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3764737 00:05:39.906 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3764737 00:05:39.906 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.906 07:01:44 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3764737 ']' 00:05:39.906 07:01:44 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.906 07:01:44 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:39.906 07:01:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.906 07:01:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:39.906 07:01:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.906 [2024-11-20 07:01:44.234955] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:39.906 [2024-11-20 07:01:44.235021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764737 ] 00:05:39.906 [2024-11-20 07:01:44.304728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.906 [2024-11-20 07:01:44.345422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.906 [2024-11-20 07:01:44.345424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.165 07:01:44 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.165 07:01:44 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:40.165 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3764747 00:05:40.165 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.165 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:40.165 [ 00:05:40.165 "spdk_get_version", 00:05:40.165 "rpc_get_methods", 00:05:40.165 "notify_get_notifications", 00:05:40.165 "notify_get_types", 00:05:40.165 "trace_get_info", 00:05:40.165 "trace_get_tpoint_group_mask", 00:05:40.165 "trace_disable_tpoint_group", 00:05:40.165 "trace_enable_tpoint_group", 00:05:40.165 "trace_clear_tpoint_mask", 00:05:40.165 "trace_set_tpoint_mask", 00:05:40.165 "fsdev_set_opts", 00:05:40.165 "fsdev_get_opts", 00:05:40.165 "framework_get_pci_devices", 00:05:40.165 "framework_get_config", 00:05:40.165 "framework_get_subsystems", 00:05:40.165 "vfu_tgt_set_base_path", 00:05:40.165 
"keyring_get_keys", 00:05:40.165 "iobuf_get_stats", 00:05:40.165 "iobuf_set_options", 00:05:40.165 "sock_get_default_impl", 00:05:40.165 "sock_set_default_impl", 00:05:40.165 "sock_impl_set_options", 00:05:40.165 "sock_impl_get_options", 00:05:40.165 "vmd_rescan", 00:05:40.165 "vmd_remove_device", 00:05:40.165 "vmd_enable", 00:05:40.165 "accel_get_stats", 00:05:40.165 "accel_set_options", 00:05:40.165 "accel_set_driver", 00:05:40.165 "accel_crypto_key_destroy", 00:05:40.165 "accel_crypto_keys_get", 00:05:40.165 "accel_crypto_key_create", 00:05:40.165 "accel_assign_opc", 00:05:40.165 "accel_get_module_info", 00:05:40.165 "accel_get_opc_assignments", 00:05:40.165 "bdev_get_histogram", 00:05:40.165 "bdev_enable_histogram", 00:05:40.165 "bdev_set_qos_limit", 00:05:40.165 "bdev_set_qd_sampling_period", 00:05:40.165 "bdev_get_bdevs", 00:05:40.165 "bdev_reset_iostat", 00:05:40.165 "bdev_get_iostat", 00:05:40.165 "bdev_examine", 00:05:40.165 "bdev_wait_for_examine", 00:05:40.165 "bdev_set_options", 00:05:40.165 "scsi_get_devices", 00:05:40.165 "thread_set_cpumask", 00:05:40.165 "scheduler_set_options", 00:05:40.165 "framework_get_governor", 00:05:40.165 "framework_get_scheduler", 00:05:40.165 "framework_set_scheduler", 00:05:40.165 "framework_get_reactors", 00:05:40.165 "thread_get_io_channels", 00:05:40.165 "thread_get_pollers", 00:05:40.165 "thread_get_stats", 00:05:40.165 "framework_monitor_context_switch", 00:05:40.165 "spdk_kill_instance", 00:05:40.165 "log_enable_timestamps", 00:05:40.165 "log_get_flags", 00:05:40.165 "log_clear_flag", 00:05:40.165 "log_set_flag", 00:05:40.165 "log_get_level", 00:05:40.165 "log_set_level", 00:05:40.165 "log_get_print_level", 00:05:40.165 "log_set_print_level", 00:05:40.165 "framework_enable_cpumask_locks", 00:05:40.165 "framework_disable_cpumask_locks", 00:05:40.165 "framework_wait_init", 00:05:40.165 "framework_start_init", 00:05:40.165 "virtio_blk_create_transport", 00:05:40.165 "virtio_blk_get_transports", 00:05:40.165 "vhost_controller_set_coalescing", 00:05:40.165 "vhost_get_controllers", 00:05:40.165 "vhost_delete_controller", 00:05:40.165 "vhost_create_blk_controller", 00:05:40.165 "vhost_scsi_controller_remove_target", 00:05:40.165 "vhost_scsi_controller_add_target", 00:05:40.165 "vhost_start_scsi_controller", 00:05:40.165 "vhost_create_scsi_controller", 00:05:40.165 "ublk_recover_disk", 00:05:40.165 "ublk_get_disks", 00:05:40.165 "ublk_stop_disk", 00:05:40.165 "ublk_start_disk", 00:05:40.165 "ublk_destroy_target", 00:05:40.165 "ublk_create_target", 00:05:40.165 "nbd_get_disks", 00:05:40.165 "nbd_stop_disk", 00:05:40.165 "nbd_start_disk", 00:05:40.165 "env_dpdk_get_mem_stats", 00:05:40.165 "nvmf_stop_mdns_prr", 00:05:40.165 "nvmf_publish_mdns_prr", 00:05:40.165 "nvmf_subsystem_get_listeners", 00:05:40.165 "nvmf_subsystem_get_qpairs", 00:05:40.165 "nvmf_subsystem_get_controllers", 00:05:40.165 "nvmf_get_stats", 00:05:40.165 "nvmf_get_transports", 00:05:40.165 "nvmf_create_transport", 00:05:40.165 "nvmf_get_targets", 00:05:40.165 "nvmf_delete_target", 00:05:40.165 "nvmf_create_target", 00:05:40.165 "nvmf_subsystem_allow_any_host", 00:05:40.165 "nvmf_subsystem_set_keys", 00:05:40.165 "nvmf_subsystem_remove_host", 00:05:40.165 "nvmf_subsystem_add_host", 00:05:40.165 "nvmf_ns_remove_host", 00:05:40.165 "nvmf_ns_add_host", 00:05:40.165 "nvmf_subsystem_remove_ns", 00:05:40.165 "nvmf_subsystem_set_ns_ana_group", 00:05:40.165 "nvmf_subsystem_add_ns", 00:05:40.165 "nvmf_subsystem_listener_set_ana_state", 00:05:40.165 "nvmf_discovery_get_referrals", 
00:05:40.165 "nvmf_discovery_remove_referral", 00:05:40.165 "nvmf_discovery_add_referral", 00:05:40.165 "nvmf_subsystem_remove_listener", 00:05:40.165 "nvmf_subsystem_add_listener", 00:05:40.165 "nvmf_delete_subsystem", 00:05:40.165 "nvmf_create_subsystem", 00:05:40.165 "nvmf_get_subsystems", 00:05:40.165 "nvmf_set_crdt", 00:05:40.165 "nvmf_set_config", 00:05:40.165 "nvmf_set_max_subsystems", 00:05:40.165 "iscsi_get_histogram", 00:05:40.165 "iscsi_enable_histogram", 00:05:40.165 "iscsi_set_options", 00:05:40.165 "iscsi_get_auth_groups", 00:05:40.165 "iscsi_auth_group_remove_secret", 00:05:40.165 "iscsi_auth_group_add_secret", 00:05:40.165 "iscsi_delete_auth_group", 00:05:40.165 "iscsi_create_auth_group", 00:05:40.165 "iscsi_set_discovery_auth", 00:05:40.165 "iscsi_get_options", 00:05:40.165 "iscsi_target_node_request_logout", 00:05:40.165 "iscsi_target_node_set_redirect", 00:05:40.165 "iscsi_target_node_set_auth", 00:05:40.165 "iscsi_target_node_add_lun", 00:05:40.165 "iscsi_get_stats", 00:05:40.165 "iscsi_get_connections", 00:05:40.165 "iscsi_portal_group_set_auth", 00:05:40.165 "iscsi_start_portal_group", 00:05:40.165 "iscsi_delete_portal_group", 00:05:40.165 "iscsi_create_portal_group", 00:05:40.165 "iscsi_get_portal_groups", 00:05:40.165 "iscsi_delete_target_node", 00:05:40.165 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.165 "iscsi_target_node_add_pg_ig_maps", 00:05:40.165 "iscsi_create_target_node", 00:05:40.165 "iscsi_get_target_nodes", 00:05:40.165 "iscsi_delete_initiator_group", 00:05:40.165 "iscsi_initiator_group_remove_initiators", 00:05:40.165 "iscsi_initiator_group_add_initiators", 00:05:40.165 "iscsi_create_initiator_group", 00:05:40.165 "iscsi_get_initiator_groups", 00:05:40.165 "fsdev_aio_delete", 00:05:40.165 "fsdev_aio_create", 00:05:40.165 "keyring_linux_set_options", 00:05:40.165 "keyring_file_remove_key", 00:05:40.165 "keyring_file_add_key", 00:05:40.165 "vfu_virtio_create_fs_endpoint", 00:05:40.165 "vfu_virtio_create_scsi_endpoint", 00:05:40.165 "vfu_virtio_scsi_remove_target", 00:05:40.165 "vfu_virtio_scsi_add_target", 00:05:40.165 "vfu_virtio_create_blk_endpoint", 00:05:40.165 "vfu_virtio_delete_endpoint", 00:05:40.165 "iaa_scan_accel_module", 00:05:40.165 "dsa_scan_accel_module", 00:05:40.165 "ioat_scan_accel_module", 00:05:40.165 "accel_error_inject_error", 00:05:40.165 "bdev_iscsi_delete", 00:05:40.166 "bdev_iscsi_create", 00:05:40.166 "bdev_iscsi_set_options", 00:05:40.166 "bdev_virtio_attach_controller", 00:05:40.166 "bdev_virtio_scsi_get_devices", 00:05:40.166 "bdev_virtio_detach_controller", 00:05:40.166 "bdev_virtio_blk_set_hotplug", 00:05:40.166 "bdev_ftl_set_property", 00:05:40.166 "bdev_ftl_get_properties", 00:05:40.166 "bdev_ftl_get_stats", 00:05:40.166 "bdev_ftl_unmap", 00:05:40.166 "bdev_ftl_unload", 00:05:40.166 "bdev_ftl_delete", 00:05:40.166 "bdev_ftl_load", 00:05:40.166 "bdev_ftl_create", 00:05:40.166 "bdev_aio_delete", 00:05:40.166 "bdev_aio_rescan", 00:05:40.166 "bdev_aio_create", 00:05:40.166 "blobfs_create", 00:05:40.166 "blobfs_detect", 00:05:40.166 "blobfs_set_cache_size", 00:05:40.166 "bdev_zone_block_delete", 00:05:40.166 "bdev_zone_block_create", 00:05:40.166 "bdev_delay_delete", 00:05:40.166 "bdev_delay_create", 00:05:40.166 "bdev_delay_update_latency", 00:05:40.166 "bdev_split_delete", 00:05:40.166 "bdev_split_create", 00:05:40.166 "bdev_error_inject_error", 00:05:40.166 "bdev_error_delete", 00:05:40.166 "bdev_error_create", 00:05:40.166 "bdev_raid_set_options", 00:05:40.166 "bdev_raid_remove_base_bdev", 00:05:40.166 
"bdev_raid_add_base_bdev", 00:05:40.166 "bdev_raid_delete", 00:05:40.166 "bdev_raid_create", 00:05:40.166 "bdev_raid_get_bdevs", 00:05:40.166 "bdev_lvol_set_parent_bdev", 00:05:40.166 "bdev_lvol_set_parent", 00:05:40.166 "bdev_lvol_check_shallow_copy", 00:05:40.166 "bdev_lvol_start_shallow_copy", 00:05:40.166 "bdev_lvol_grow_lvstore", 00:05:40.166 "bdev_lvol_get_lvols", 00:05:40.166 "bdev_lvol_get_lvstores", 00:05:40.166 "bdev_lvol_delete", 00:05:40.166 "bdev_lvol_set_read_only", 00:05:40.166 "bdev_lvol_resize", 00:05:40.166 "bdev_lvol_decouple_parent", 00:05:40.166 "bdev_lvol_inflate", 00:05:40.166 "bdev_lvol_rename", 00:05:40.166 "bdev_lvol_clone_bdev", 00:05:40.166 "bdev_lvol_clone", 00:05:40.166 "bdev_lvol_snapshot", 00:05:40.166 "bdev_lvol_create", 00:05:40.166 "bdev_lvol_delete_lvstore", 00:05:40.166 "bdev_lvol_rename_lvstore", 00:05:40.166 "bdev_lvol_create_lvstore", 00:05:40.166 "bdev_passthru_delete", 00:05:40.166 "bdev_passthru_create", 00:05:40.166 "bdev_nvme_cuse_unregister", 00:05:40.166 "bdev_nvme_cuse_register", 00:05:40.166 "bdev_opal_new_user", 00:05:40.166 "bdev_opal_set_lock_state", 00:05:40.166 "bdev_opal_delete", 00:05:40.166 "bdev_opal_get_info", 00:05:40.166 "bdev_opal_create", 00:05:40.166 "bdev_nvme_opal_revert", 00:05:40.166 "bdev_nvme_opal_init", 00:05:40.166 "bdev_nvme_send_cmd", 00:05:40.166 "bdev_nvme_set_keys", 00:05:40.166 "bdev_nvme_get_path_iostat", 00:05:40.166 "bdev_nvme_get_mdns_discovery_info", 00:05:40.166 "bdev_nvme_stop_mdns_discovery", 00:05:40.166 "bdev_nvme_start_mdns_discovery", 00:05:40.166 "bdev_nvme_set_multipath_policy", 00:05:40.166 "bdev_nvme_set_preferred_path", 00:05:40.166 "bdev_nvme_get_io_paths", 00:05:40.166 "bdev_nvme_remove_error_injection", 00:05:40.166 "bdev_nvme_add_error_injection", 00:05:40.166 "bdev_nvme_get_discovery_info", 00:05:40.166 "bdev_nvme_stop_discovery", 00:05:40.166 "bdev_nvme_start_discovery", 00:05:40.166 "bdev_nvme_get_controller_health_info", 00:05:40.166 "bdev_nvme_disable_controller", 00:05:40.166 "bdev_nvme_enable_controller", 00:05:40.166 "bdev_nvme_reset_controller", 00:05:40.166 "bdev_nvme_get_transport_statistics", 00:05:40.166 "bdev_nvme_apply_firmware", 00:05:40.166 "bdev_nvme_detach_controller", 00:05:40.166 "bdev_nvme_get_controllers", 00:05:40.166 "bdev_nvme_attach_controller", 00:05:40.166 "bdev_nvme_set_hotplug", 00:05:40.166 "bdev_nvme_set_options", 00:05:40.166 "bdev_null_resize", 00:05:40.166 "bdev_null_delete", 00:05:40.166 "bdev_null_create", 00:05:40.166 "bdev_malloc_delete", 00:05:40.166 "bdev_malloc_create" 00:05:40.166 ] 00:05:40.425 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.425 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:40.425 07:01:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3764737 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3764737 ']' 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3764737 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3764737 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:40.425 
07:01:44 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3764737' 00:05:40.425 killing process with pid 3764737 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3764737 00:05:40.425 07:01:44 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3764737 00:05:40.683 00:05:40.683 real 0m1.121s 00:05:40.683 user 0m1.874s 00:05:40.683 sys 0m0.472s 00:05:40.683 07:01:45 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:40.683 07:01:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.683 ************************************ 00:05:40.683 END TEST spdkcli_tcp 00:05:40.683 ************************************ 00:05:40.683 07:01:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.683 07:01:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:40.683 07:01:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:40.683 07:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.683 ************************************ 00:05:40.683 START TEST dpdk_mem_utility 00:05:40.684 ************************************ 00:05:40.684 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.942 * Looking for test storage... 00:05:40.942 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:40.942 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.942 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.942 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.942 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.942 07:01:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:40.943 07:01:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.943 07:01:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.943 07:01:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.943 07:01:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.943 --rc genhtml_branch_coverage=1 00:05:40.943 --rc genhtml_function_coverage=1 00:05:40.943 --rc genhtml_legend=1 00:05:40.943 --rc geninfo_all_blocks=1 00:05:40.943 --rc geninfo_unexecuted_blocks=1 00:05:40.943 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.943 ' 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.943 --rc genhtml_branch_coverage=1 00:05:40.943 --rc genhtml_function_coverage=1 00:05:40.943 --rc genhtml_legend=1 00:05:40.943 --rc geninfo_all_blocks=1 00:05:40.943 --rc geninfo_unexecuted_blocks=1 00:05:40.943 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.943 ' 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.943 --rc genhtml_branch_coverage=1 00:05:40.943 --rc genhtml_function_coverage=1 00:05:40.943 --rc genhtml_legend=1 00:05:40.943 --rc geninfo_all_blocks=1 00:05:40.943 --rc geninfo_unexecuted_blocks=1 00:05:40.943 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.943 ' 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.943 --rc genhtml_branch_coverage=1 00:05:40.943 --rc genhtml_function_coverage=1 00:05:40.943 --rc genhtml_legend=1 00:05:40.943 --rc geninfo_all_blocks=1 00:05:40.943 --rc geninfo_unexecuted_blocks=1 00:05:40.943 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.943 ' 00:05:40.943 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.943 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3765075 00:05:40.943 07:01:45 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3765075 00:05:40.943 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3765075 ']' 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.943 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.943 [2024-11-20 07:01:45.430401] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:40.943 [2024-11-20 07:01:45.430485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765075 ] 00:05:41.202 [2024-11-20 07:01:45.501816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.202 [2024-11-20 07:01:45.541206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.202 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:41.202 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:41.202 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:41.202 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:41.202 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.202 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.460 { 00:05:41.460 "filename": "/tmp/spdk_mem_dump.txt" 00:05:41.460 } 00:05:41.460 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.460 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:41.460 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:41.460 1 heaps totaling size 818.000000 MiB 00:05:41.460 size: 818.000000 MiB heap id: 0 00:05:41.460 end heaps---------- 00:05:41.460 9 mempools totaling size 603.782043 MiB 00:05:41.460 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:41.460 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:41.460 size: 100.555481 MiB name: bdev_io_3765075 00:05:41.460 size: 50.003479 MiB name: msgpool_3765075 00:05:41.460 size: 36.509338 MiB name: fsdev_io_3765075 00:05:41.460 size: 21.763794 MiB name: PDU_Pool 00:05:41.460 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:41.460 size: 4.133484 MiB name: evtpool_3765075 00:05:41.460 size: 0.026123 MiB name: Session_Pool 00:05:41.461 end mempools------- 00:05:41.461 6 memzones totaling size 4.142822 MiB 00:05:41.461 size: 1.000366 MiB name: RG_ring_0_3765075 00:05:41.461 size: 1.000366 MiB name: RG_ring_1_3765075 00:05:41.461 size: 1.000366 MiB name: RG_ring_4_3765075 
00:05:41.461 size: 1.000366 MiB name: RG_ring_5_3765075 00:05:41.461 size: 0.125366 MiB name: RG_ring_2_3765075 00:05:41.461 size: 0.015991 MiB name: RG_ring_3_3765075 00:05:41.461 end memzones------- 00:05:41.461 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:41.461 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:41.461 list of free elements. size: 10.852478 MiB 00:05:41.461 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:41.461 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:41.461 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:41.461 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:41.461 element at address: 0x200008000000 with size: 0.959839 MiB 00:05:41.461 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:41.461 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:41.461 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:41.461 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:41.461 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:41.461 element at address: 0x200003e00000 with size: 0.490723 MiB 00:05:41.461 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:41.461 element at address: 0x200010600000 with size: 0.481934 MiB 00:05:41.461 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:41.461 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:41.461 list of standard malloc elements. size: 199.218628 MiB 00:05:41.461 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:05:41.461 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:05:41.461 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:41.461 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:41.461 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:41.461 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:41.461 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:41.461 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:41.461 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:41.461 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20000085b100 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000008df880 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:41.461 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:05:41.461 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:05:41.461 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:05:41.461 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20001067b600 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:05:41.461 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:41.461 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:41.461 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:41.461 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:41.461 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:41.461 list of memzone associated elements. size: 607.928894 MiB 00:05:41.461 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:41.461 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:41.461 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:41.461 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:41.461 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:41.461 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3765075_0 00:05:41.461 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:41.461 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3765075_0 00:05:41.461 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:05:41.461 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3765075_0 00:05:41.461 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:41.461 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:41.461 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:41.461 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:41.461 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:41.461 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3765075_0 00:05:41.461 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:41.461 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3765075 00:05:41.461 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:41.461 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3765075 00:05:41.461 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:05:41.461 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:41.461 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:41.461 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:41.461 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:05:41.461 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:41.461 element at address: 0x200003efde40 with size: 1.008118 MiB 00:05:41.461 associated memzone info: size: 1.007996 
MiB name: MP_SCSI_TASK_Pool 00:05:41.461 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:41.461 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3765075 00:05:41.461 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:41.461 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3765075 00:05:41.461 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:41.461 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3765075 00:05:41.461 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:41.461 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3765075 00:05:41.461 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:05:41.461 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3765075 00:05:41.461 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:41.461 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3765075 00:05:41.461 element at address: 0x20001067b780 with size: 0.500488 MiB 00:05:41.461 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:41.461 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:05:41.461 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:41.461 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:41.461 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:41.461 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:41.461 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3765075 00:05:41.461 element at address: 0x2000008df940 with size: 0.125488 MiB 00:05:41.461 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3765075 00:05:41.461 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:05:41.461 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:41.461 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:41.461 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:41.461 element at address: 0x2000008db680 with size: 0.016113 MiB 00:05:41.461 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3765075 00:05:41.461 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:41.461 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:41.461 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:41.461 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3765075 00:05:41.461 element at address: 0x2000008db480 with size: 0.000305 MiB 00:05:41.461 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3765075 00:05:41.461 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:41.461 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3765075 00:05:41.461 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:41.461 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:41.461 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:41.461 07:01:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3765075 00:05:41.461 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3765075 ']' 00:05:41.461 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3765075 00:05:41.461 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:41.461 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:05:41.461 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3765075 00:05:41.462 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:41.462 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:41.462 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3765075' 00:05:41.462 killing process with pid 3765075 00:05:41.462 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3765075 00:05:41.462 07:01:45 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3765075 00:05:41.720 00:05:41.720 real 0m1.006s 00:05:41.720 user 0m0.939s 00:05:41.720 sys 0m0.436s 00:05:41.720 07:01:46 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.720 07:01:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.720 ************************************ 00:05:41.720 END TEST dpdk_mem_utility 00:05:41.720 ************************************ 00:05:41.720 07:01:46 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:41.720 07:01:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:41.720 07:01:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.720 07:01:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.978 ************************************ 00:05:41.978 START TEST event 00:05:41.978 ************************************ 00:05:41.978 07:01:46 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:41.978 * Looking for test storage... 00:05:41.978 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:41.978 07:01:46 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:41.978 07:01:46 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:41.978 07:01:46 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:41.978 07:01:46 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:41.978 07:01:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.978 07:01:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.978 07:01:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.978 07:01:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.978 07:01:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.978 07:01:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.978 07:01:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.978 07:01:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.978 07:01:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.978 07:01:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.978 07:01:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.978 07:01:46 event -- scripts/common.sh@344 -- # case "$op" in 00:05:41.978 07:01:46 event -- scripts/common.sh@345 -- # : 1 00:05:41.978 07:01:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.978 07:01:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.978 07:01:46 event -- scripts/common.sh@365 -- # decimal 1 00:05:41.978 07:01:46 event -- scripts/common.sh@353 -- # local d=1 00:05:41.978 07:01:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.978 07:01:46 event -- scripts/common.sh@355 -- # echo 1 00:05:41.978 07:01:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.978 07:01:46 event -- scripts/common.sh@366 -- # decimal 2 00:05:41.978 07:01:46 event -- scripts/common.sh@353 -- # local d=2 00:05:41.978 07:01:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.978 07:01:46 event -- scripts/common.sh@355 -- # echo 2 00:05:41.978 07:01:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.978 07:01:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.978 07:01:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.978 07:01:46 event -- scripts/common.sh@368 -- # return 0 00:05:41.978 07:01:46 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.978 07:01:46 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:41.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.978 --rc genhtml_branch_coverage=1 00:05:41.978 --rc genhtml_function_coverage=1 00:05:41.978 --rc genhtml_legend=1 00:05:41.978 --rc geninfo_all_blocks=1 00:05:41.978 --rc geninfo_unexecuted_blocks=1 00:05:41.978 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:41.978 ' 00:05:41.978 07:01:46 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:41.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.978 --rc genhtml_branch_coverage=1 00:05:41.978 --rc genhtml_function_coverage=1 00:05:41.978 --rc genhtml_legend=1 00:05:41.978 --rc geninfo_all_blocks=1 00:05:41.978 --rc geninfo_unexecuted_blocks=1 00:05:41.979 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:41.979 ' 00:05:41.979 07:01:46 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:41.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.979 --rc genhtml_branch_coverage=1 00:05:41.979 --rc genhtml_function_coverage=1 00:05:41.979 --rc genhtml_legend=1 00:05:41.979 --rc geninfo_all_blocks=1 00:05:41.979 --rc geninfo_unexecuted_blocks=1 00:05:41.979 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:41.979 ' 00:05:41.979 07:01:46 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:41.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.979 --rc genhtml_branch_coverage=1 00:05:41.979 --rc genhtml_function_coverage=1 00:05:41.979 --rc genhtml_legend=1 00:05:41.979 --rc geninfo_all_blocks=1 00:05:41.979 --rc geninfo_unexecuted_blocks=1 00:05:41.979 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:41.979 ' 00:05:41.979 07:01:46 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:41.979 07:01:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:41.979 07:01:46 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.979 07:01:46 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:41.979 07:01:46 event -- common/autotest_common.sh@1109 -- # xtrace_disable 
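The lt 1.15 2 walk through scripts/common.sh traced above, which repeats before every suite in this log, is autotest checking whether the installed lcov is older than version 2 so it can export the matching LCOV flags. A minimal bash re-creation of that comparison; the function name, the zero-padding, and the restriction to '<', '>' and '==' are simplifications for illustration, not the repo's exact code:

    cmp_versions_sketch() {   # usage: cmp_versions_sketch 1.15 '<' 2
        local IFS=.-:         # split on the same separators the trace shows
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$3"
        local v max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${v1[v]:-0} b=${v2[v]:-0}      # pad missing components with 0
            (( 10#$a > 10#$b )) && { [[ $2 == '>' ]]; return; }
            (( 10#$a < 10#$b )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]]      # components equal all the way through
    }
    cmp_versions_sketch 1.15 '<' 2 && echo 'lcov 1.15 is older than 2'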
00:05:41.979 07:01:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.239 ************************************ 00:05:42.239 START TEST event_perf 00:05:42.239 ************************************ 00:05:42.239 07:01:46 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.239 Running I/O for 1 seconds...[2024-11-20 07:01:46.557085] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:42.239 [2024-11-20 07:01:46.557181] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765202 ] 00:05:42.239 [2024-11-20 07:01:46.631558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.239 [2024-11-20 07:01:46.675388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.239 [2024-11-20 07:01:46.675484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.239 [2024-11-20 07:01:46.675543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.239 [2024-11-20 07:01:46.675545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.175 Running I/O for 1 seconds... 00:05:43.175 lcore 0: 191260 00:05:43.175 lcore 1: 191258 00:05:43.175 lcore 2: 191258 00:05:43.175 lcore 3: 191258 00:05:43.175 done. 00:05:43.175 00:05:43.175 real 0m1.174s 00:05:43.175 user 0m4.080s 00:05:43.175 sys 0m0.090s 00:05:43.175 07:01:47 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.175 07:01:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.175 ************************************ 00:05:43.175 END TEST event_perf 00:05:43.175 ************************************ 00:05:43.434 07:01:47 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:43.434 07:01:47 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:43.434 07:01:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.434 07:01:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.434 ************************************ 00:05:43.434 START TEST event_reactor 00:05:43.434 ************************************ 00:05:43.434 07:01:47 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:43.434 [2024-11-20 07:01:47.801264] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
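The four per-lcore counts printed by event_perf above sit within a few events of each other, which suggests the run distributed events evenly across the 0xF reactor mask during the 1-second window. If the tool's stdout is captured to a file (the file name here is an assumption, not from this log), totaling the throughput is a one-liner:

    grep '^lcore' event_perf.log \
        | awk '{ sum += $3 } END { printf "total: %d events in 1 second\n", sum }'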
00:05:43.434 [2024-11-20 07:01:47.801322] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765445 ] 00:05:43.434 [2024-11-20 07:01:47.870752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.434 [2024-11-20 07:01:47.909713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.813 test_start 00:05:44.813 oneshot 00:05:44.813 tick 100 00:05:44.813 tick 100 00:05:44.813 tick 250 00:05:44.813 tick 100 00:05:44.813 tick 100 00:05:44.813 tick 100 00:05:44.813 tick 250 00:05:44.813 tick 500 00:05:44.813 tick 100 00:05:44.813 tick 100 00:05:44.813 tick 250 00:05:44.813 tick 100 00:05:44.813 tick 100 00:05:44.813 test_end 00:05:44.813 00:05:44.813 real 0m1.158s 00:05:44.813 user 0m1.079s 00:05:44.813 sys 0m0.075s 00:05:44.813 07:01:48 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.813 07:01:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:44.813 ************************************ 00:05:44.813 END TEST event_reactor 00:05:44.813 ************************************ 00:05:44.813 07:01:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.813 07:01:48 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:44.813 07:01:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.813 07:01:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.813 ************************************ 00:05:44.813 START TEST event_reactor_perf 00:05:44.813 ************************************ 00:05:44.813 07:01:49 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.813 [2024-11-20 07:01:49.036183] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
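Every suite in this log is bracketed by the same START TEST / END TEST banners and a real/user/sys timing line, produced by autotest's run_test wrapper. A stripped-down sketch of that banner-and-time pattern; this illustrates the idea only and is not the actual implementation in autotest_common.sh:

    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"             # run the wrapped test command
        local rc=$?           # exit status of the wrapped command
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    # e.g. run_test_sketch event_reactor_perf ./reactor_perf -t 1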
00:05:44.813 [2024-11-20 07:01:49.036264] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765727 ] 00:05:44.813 [2024-11-20 07:01:49.109216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.813 [2024-11-20 07:01:49.147873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.748 test_start 00:05:45.748 test_end 00:05:45.748 Performance: 974829 events per second 00:05:45.748 00:05:45.748 real 0m1.168s 00:05:45.748 user 0m1.089s 00:05:45.748 sys 0m0.075s 00:05:45.748 07:01:50 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.748 07:01:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.748 ************************************ 00:05:45.748 END TEST event_reactor_perf 00:05:45.748 ************************************ 00:05:45.748 07:01:50 event -- event/event.sh@49 -- # uname -s 00:05:45.748 07:01:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:45.748 07:01:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.748 07:01:50 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.748 07:01:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.748 07:01:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.748 ************************************ 00:05:45.748 START TEST event_scheduler 00:05:45.748 ************************************ 00:05:45.748 07:01:50 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:46.015 * Looking for test storage... 
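The LCOV_OPTS / LCOV exports that precede each suite above only stage environment variables; no coverage is measured at this point. They feed a later capture step once tests finish. An assumed example of the kind of call those variables end up in; the directory variable and output file name below are placeholders, not taken from this log:

    # $LCOV was exported above as 'lcov --rc ... --gcov-tool .../llvm-gcov.sh'
    $LCOV --capture --directory "$output_dir" --output-file spdk_cov.info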
00:05:46.015 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.015 07:01:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.015 --rc genhtml_branch_coverage=1 00:05:46.015 --rc genhtml_function_coverage=1 00:05:46.015 --rc genhtml_legend=1 00:05:46.015 --rc geninfo_all_blocks=1 00:05:46.015 --rc geninfo_unexecuted_blocks=1 00:05:46.015 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.015 ' 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.015 --rc genhtml_branch_coverage=1 00:05:46.015 --rc genhtml_function_coverage=1 00:05:46.015 --rc genhtml_legend=1 00:05:46.015 --rc geninfo_all_blocks=1 00:05:46.015 --rc geninfo_unexecuted_blocks=1 00:05:46.015 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.015 ' 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.015 --rc genhtml_branch_coverage=1 00:05:46.015 --rc genhtml_function_coverage=1 00:05:46.015 --rc genhtml_legend=1 00:05:46.015 --rc geninfo_all_blocks=1 00:05:46.015 --rc geninfo_unexecuted_blocks=1 00:05:46.015 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.015 ' 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.015 --rc genhtml_branch_coverage=1 00:05:46.015 --rc genhtml_function_coverage=1 00:05:46.015 --rc genhtml_legend=1 00:05:46.015 --rc geninfo_all_blocks=1 00:05:46.015 --rc geninfo_unexecuted_blocks=1 00:05:46.015 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.015 ' 00:05:46.015 07:01:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:46.015 07:01:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3766041 00:05:46.015 07:01:50 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.015 07:01:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:46.015 07:01:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3766041 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3766041 ']' 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.015 07:01:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.016 [2024-11-20 07:01:50.481948] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:46.016 [2024-11-20 07:01:50.482016] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3766041 ] 00:05:46.016 [2024-11-20 07:01:50.550301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.274 [2024-11-20 07:01:50.597808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.274 [2024-11-20 07:01:50.597896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.274 [2024-11-20 07:01:50.597980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.275 [2024-11-20 07:01:50.597982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:46.275 07:01:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.275 [2024-11-20 07:01:50.658595] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:46.275 [2024-11-20 07:01:50.658620] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:46.275 [2024-11-20 07:01:50.658633] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:46.275 [2024-11-20 07:01:50.658641] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:46.275 [2024-11-20 07:01:50.658648] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.275 07:01:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
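The lt/cmp_versions xtrace at the top of this section is the harness probing whether the installed lcov is new enough for the branch/function coverage flags. Reconstructed from that trace, the helper is essentially the field-wise compare below — a simplified sketch, not the literal scripts/common.sh body (non-numeric version fields and octal-looking components are not handled):

lt() { cmp_versions "$1" '<' "$2"; }    # e.g. "lt 1.15 2" as traced above -> true

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"    # same IFS=.-: split seen in the trace
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]    # versions compare equal
}

Since lcov 1.15 compares less than 2 here, the harness keeps the --rc lcov_branch_coverage / --rc lcov_function_coverage options it exported into LCOV_OPTS above.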
00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.275 [2024-11-20 07:01:50.733560] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.275 07:01:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.275 07:01:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.275 ************************************ 00:05:46.275 START TEST scheduler_create_thread 00:05:46.275 ************************************ 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.275 2 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.275 3 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.275 4 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.275 5 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:46.275 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.275 
07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 6 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 7 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 8 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 9 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 10 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 07:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.469 07:01:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.469 07:01:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:47.469 07:01:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.469 07:01:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.843 07:01:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.843 07:01:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:48.844 07:01:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:48.844 07:01:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.844 07:01:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.778 07:01:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.778 00:05:49.779 real 0m3.382s 00:05:49.779 user 0m0.025s 00:05:49.779 sys 0m0.006s 00:05:49.779 07:01:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.779 07:01:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.779 ************************************ 00:05:49.779 END TEST scheduler_create_thread 00:05:49.779 ************************************ 00:05:49.779 07:01:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:49.779 07:01:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3766041 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3766041 ']' 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3766041 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3766041 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3766041' 00:05:49.779 killing process with pid 3766041 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3766041 00:05:49.779 07:01:54 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3766041 00:05:50.037 [2024-11-20 07:01:54.537526] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
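Stripped of the xtrace plumbing, the scheduler_create_thread test above is a short RPC conversation with the scheduler test app. A condensed equivalent using the same rpc.py calls that appear in the trace — thread names, masks and thread ids are taken from the log; the assumption is that the scheduler_plugin module is importable (the harness arranges PYTHONPATH for it):

RPC="./scripts/rpc.py --plugin scheduler_plugin"
$RPC framework_set_scheduler dynamic        # app was launched --wait-for-rpc, so init is still pending
$RPC framework_start_init
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # one of eight pinned threads in the trace
$RPC scheduler_thread_create -n half_active -a 0              # unpinned; the trace got thread_id=11
$RPC scheduler_thread_set_active 11 50                        # set it to 50% active
$RPC scheduler_thread_create -n deleted -a 100                # trace got thread_id=12 ...
$RPC scheduler_thread_delete 12                               # ... and deletes it again

The repeated "[[ 0 == 0 ]]" lines bracketing each call are just the rpc_cmd wrapper checking the RPC's exit status.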
00:05:50.296 00:05:50.296 real 0m4.472s 00:05:50.296 user 0m7.814s 00:05:50.296 sys 0m0.443s 00:05:50.296 07:01:54 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.296 07:01:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.296 ************************************ 00:05:50.296 END TEST event_scheduler 00:05:50.296 ************************************ 00:05:50.296 07:01:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.296 07:01:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.296 07:01:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.296 07:01:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.296 07:01:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.296 ************************************ 00:05:50.296 START TEST app_repeat 00:05:50.296 ************************************ 00:05:50.296 07:01:54 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3766896 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3766896' 00:05:50.296 Process app_repeat pid: 3766896 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.296 spdk_app_start Round 0 00:05:50.296 07:01:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3766896 /var/tmp/spdk-nbd.sock 00:05:50.296 07:01:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3766896 ']' 00:05:50.296 07:01:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.296 07:01:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.296 07:01:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.297 07:01:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.297 07:01:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.297 [2024-11-20 07:01:54.851038] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
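waitforlisten above parks the test until the freshly forked app_repeat process answers on /var/tmp/spdk-nbd.sock. The pattern is roughly the poll loop below — a sketch rather than the literal autotest_common.sh body; rpc_get_methods as the liveness probe and the retry budget are assumptions:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2> /dev/null || return 1       # target died during startup
        ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1                                          # never started listening
}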
00:05:50.297 [2024-11-20 07:01:54.851119] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3766896 ] 00:05:50.556 [2024-11-20 07:01:54.925167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.556 [2024-11-20 07:01:54.968234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.556 [2024-11-20 07:01:54.968238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.556 07:01:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.556 07:01:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:50.556 07:01:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.814 Malloc0 00:05:50.814 07:01:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.073 Malloc1 00:05:51.073 07:01:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.073 07:01:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.331 /dev/nbd0 00:05:51.331 07:01:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.331 07:01:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.331 1+0 records in 00:05:51.331 1+0 records out 00:05:51.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255541 s, 16.0 MB/s 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:51.331 07:01:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:51.331 07:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.331 07:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.331 07:01:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.590 /dev/nbd1 00:05:51.590 07:01:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.590 07:01:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.590 1+0 records in 00:05:51.590 1+0 records out 00:05:51.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155565 s, 26.3 MB/s 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:51.590 07:01:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:51.590 07:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.590 07:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
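Each waitfornbd loop above ends with a one-block O_DIRECT read to prove the kernel really attached the device, not merely that the RPC returned. The whole export-and-probe sequence, condensed from the trace (bdev name/size, socket path and block counts are as logged; /tmp/nbdtest stands in for the test-tree scratch file):

RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096           # 64 MiB malloc bdev, 4 KiB blocks -> "Malloc0"
$RPC nbd_start_disk Malloc0 /dev/nbd0     # expose the bdev as a kernel block device
grep -q -w nbd0 /proc/partitions          # retry until the kernel lists it
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # single-block read probe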
00:05:51.590 07:01:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.590 07:01:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.590 07:01:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.850 { 00:05:51.850 "nbd_device": "/dev/nbd0", 00:05:51.850 "bdev_name": "Malloc0" 00:05:51.850 }, 00:05:51.850 { 00:05:51.850 "nbd_device": "/dev/nbd1", 00:05:51.850 "bdev_name": "Malloc1" 00:05:51.850 } 00:05:51.850 ]' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.850 { 00:05:51.850 "nbd_device": "/dev/nbd0", 00:05:51.850 "bdev_name": "Malloc0" 00:05:51.850 }, 00:05:51.850 { 00:05:51.850 "nbd_device": "/dev/nbd1", 00:05:51.850 "bdev_name": "Malloc1" 00:05:51.850 } 00:05:51.850 ]' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.850 /dev/nbd1' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.850 /dev/nbd1' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.850 256+0 records in 00:05:51.850 256+0 records out 00:05:51.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445223 s, 236 MB/s 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.850 256+0 records in 00:05:51.850 256+0 records out 00:05:51.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197901 s, 53.0 MB/s 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.850 256+0 records in 00:05:51.850 256+0 records out 00:05:51.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212963 s, 49.2 
MB/s 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.850 07:01:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.109 07:01:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.367 07:01:56 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.367 07:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.626 07:01:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.626 07:01:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.626 07:01:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.885 [2024-11-20 07:01:57.300590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.885 [2024-11-20 07:01:57.337172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.885 [2024-11-20 07:01:57.337175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.885 [2024-11-20 07:01:57.377614] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.885 [2024-11-20 07:01:57.377656] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.167 07:02:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.167 07:02:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.167 spdk_app_start Round 1 00:05:56.167 07:02:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3766896 /var/tmp/spdk-nbd.sock 00:05:56.167 07:02:00 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3766896 ']' 00:05:56.167 07:02:00 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.167 07:02:00 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.167 07:02:00 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
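The Round 0 data pass that just finished is symmetric: fill a scratch file from /dev/urandom, dd it onto every exported device with O_DIRECT, then cmp each device back against the file before tearing the devices down. In outline (paths shortened; this mirrors the nbd_dd_data_verify calls in the trace rather than quoting them):

dd if=/dev/urandom of=nbdrandtest bs=4096 count=256     # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M nbdrandtest "$nbd"    # any mismatch -> non-zero exit, test fails
done
rm nbdrandtest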
00:05:56.167 07:02:00 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.167 07:02:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.167 07:02:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.167 07:02:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:56.167 07:02:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.167 Malloc0 00:05:56.167 07:02:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.426 Malloc1 00:05:56.426 07:02:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.426 /dev/nbd0 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.426 1+0 records in 00:05:56.426 1+0 records out 00:05:56.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002191 s, 18.7 MB/s 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:56.426 07:02:00 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.426 07:02:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.684 /dev/nbd1 00:05:56.684 07:02:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.684 07:02:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.684 1+0 records in 00:05:56.684 1+0 records out 00:05:56.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235457 s, 17.4 MB/s 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:56.684 07:02:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:56.684 07:02:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.684 07:02:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.684 07:02:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.684 07:02:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.684 07:02:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.943 { 00:05:56.943 "nbd_device": "/dev/nbd0", 00:05:56.943 "bdev_name": "Malloc0" 00:05:56.943 }, 00:05:56.943 { 00:05:56.943 "nbd_device": "/dev/nbd1", 00:05:56.943 "bdev_name": "Malloc1" 00:05:56.943 } 00:05:56.943 ]' 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.943 { 00:05:56.943 "nbd_device": "/dev/nbd0", 00:05:56.943 "bdev_name": "Malloc0" 00:05:56.943 }, 00:05:56.943 { 00:05:56.943 "nbd_device": "/dev/nbd1", 00:05:56.943 "bdev_name": "Malloc1" 00:05:56.943 } 00:05:56.943 ]' 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.943 /dev/nbd1' 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.943 /dev/nbd1' 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.943 256+0 records in 00:05:56.943 256+0 records out 00:05:56.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104559 s, 100 MB/s 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.943 256+0 records in 00:05:56.943 256+0 records out 00:05:56.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198973 s, 52.7 MB/s 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.943 07:02:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.201 256+0 records in 00:05:57.201 256+0 records out 00:05:57.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211809 s, 49.5 MB/s 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.201 07:02:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.202 07:02:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.460 07:02:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.719 07:02:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.719 07:02:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.978 07:02:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.245 [2024-11-20 07:02:02.568595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.245 [2024-11-20 07:02:02.605244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.245 [2024-11-20 07:02:02.605247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.245 [2024-11-20 07:02:02.646486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.245 [2024-11-20 07:02:02.646530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.892 07:02:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.892 07:02:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.892 spdk_app_start Round 2 00:06:00.892 07:02:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3766896 /var/tmp/spdk-nbd.sock 00:06:00.892 07:02:05 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3766896 ']' 00:06:00.892 07:02:05 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.892 07:02:05 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.892 07:02:05 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
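Between rounds, after both nbd_stop_disk calls, the harness confirms nothing is still exported by counting devices in the nbd_get_disks JSON — the same jq '.[] | .nbd_device' / grep -c pipeline visible in the trace. Condensed (the || true guard is an assumption here, since grep -c exits non-zero when the count is 0):

count=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true)
[[ $count -eq 0 ]]    # clean slate; safe to SIGTERM the app and start the next round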
00:06:00.892 07:02:05 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.892 07:02:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.150 07:02:05 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.150 07:02:05 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:01.150 07:02:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.408 Malloc0 00:06:01.408 07:02:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.667 Malloc1 00:06:01.667 07:02:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.667 07:02:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.668 07:02:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.668 07:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.668 07:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.668 07:02:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.668 /dev/nbd0 00:06:01.668 07:02:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.668 07:02:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.668 1+0 records in 00:06:01.668 1+0 records out 00:06:01.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225311 s, 18.2 MB/s 00:06:01.668 07:02:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.926 /dev/nbd1 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.926 1+0 records in 00:06:01.926 1+0 records out 00:06:01.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254731 s, 16.1 MB/s 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:01.926 07:02:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.926 07:02:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.185 { 00:06:02.185 "nbd_device": "/dev/nbd0", 00:06:02.185 "bdev_name": "Malloc0" 00:06:02.185 }, 00:06:02.185 { 00:06:02.185 "nbd_device": "/dev/nbd1", 00:06:02.185 "bdev_name": "Malloc1" 00:06:02.185 } 00:06:02.185 ]' 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.185 { 00:06:02.185 "nbd_device": "/dev/nbd0", 00:06:02.185 "bdev_name": "Malloc0" 00:06:02.185 }, 00:06:02.185 { 00:06:02.185 "nbd_device": "/dev/nbd1", 00:06:02.185 "bdev_name": "Malloc1" 00:06:02.185 } 00:06:02.185 ]' 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.185 /dev/nbd1' 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.185 /dev/nbd1' 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.185 256+0 records in 00:06:02.185 256+0 records out 00:06:02.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110614 s, 94.8 MB/s 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.185 07:02:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.444 256+0 records in 00:06:02.444 256+0 records out 00:06:02.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199902 s, 52.5 MB/s 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.445 256+0 records in 00:06:02.445 256+0 records out 00:06:02.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213152 s, 49.2 MB/s 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.445 07:02:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.705 07:02:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.705 07:02:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.706 07:02:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.964 07:02:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.964 07:02:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.221 07:02:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.480 [2024-11-20 07:02:07.808858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.480 [2024-11-20 07:02:07.845639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.480 [2024-11-20 07:02:07.845642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.480 [2024-11-20 07:02:07.886057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.480 [2024-11-20 07:02:07.886101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.765 07:02:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3766896 /var/tmp/spdk-nbd.sock 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3766896 ']' 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
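A condensed sketch of the nbd_rpc_data_verify flow traced above: random data is written through each NBD device with O_DIRECT and read back for a byte-wise compare, after which the disks are stopped over RPC. Socket path, device names, block sizes, and the cmp invocation are taken from the log; the temp-file location is a simplification of the spdk/test/event/nbdrandtest path, and the snippet assumes it runs from the spdk checkout.

  rpc=/var/tmp/spdk-nbd.sock
  tmp=/tmp/nbdrandtest                                   # log uses spdk/test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB of reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # write the pattern to the device
      cmp -b -n 1M $tmp $nbd                             # read back, compare the first 1 MiB
  done
  rm $tmp
  for nbd in /dev/nbd0 /dev/nbd1; do
      scripts/rpc.py -s $rpc nbd_stop_disk $nbd          # log uses the full workspace path to rpc.py
  done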
00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:06.765 07:02:10 event.app_repeat -- event/event.sh@39 -- # killprocess 3766896 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3766896 ']' 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3766896 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3766896 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3766896' 00:06:06.765 killing process with pid 3766896 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3766896 00:06:06.765 07:02:10 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3766896 00:06:06.765 spdk_app_start is called in Round 0. 00:06:06.765 Shutdown signal received, stop current app iteration 00:06:06.765 Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 reinitialization... 00:06:06.765 spdk_app_start is called in Round 1. 00:06:06.765 Shutdown signal received, stop current app iteration 00:06:06.765 Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 reinitialization... 00:06:06.765 spdk_app_start is called in Round 2. 00:06:06.765 Shutdown signal received, stop current app iteration 00:06:06.765 Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 reinitialization... 00:06:06.765 spdk_app_start is called in Round 3. 
00:06:06.765 Shutdown signal received, stop current app iteration 00:06:06.765 07:02:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.765 07:02:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.765 00:06:06.765 real 0m16.188s 00:06:06.765 user 0m34.789s 00:06:06.765 sys 0m3.150s 00:06:06.765 07:02:11 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:06.765 07:02:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.765 ************************************ 00:06:06.765 END TEST app_repeat 00:06:06.765 ************************************ 00:06:06.765 07:02:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.765 07:02:11 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.765 07:02:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:06.765 07:02:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.765 07:02:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.765 ************************************ 00:06:06.765 START TEST cpu_locks 00:06:06.765 ************************************ 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.766 * Looking for test storage... 00:06:06.766 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.766 07:02:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:06.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.766 --rc genhtml_branch_coverage=1 00:06:06.766 --rc genhtml_function_coverage=1 00:06:06.766 --rc genhtml_legend=1 00:06:06.766 --rc geninfo_all_blocks=1 00:06:06.766 --rc geninfo_unexecuted_blocks=1 00:06:06.766 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:06.766 ' 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:06.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.766 --rc genhtml_branch_coverage=1 00:06:06.766 --rc genhtml_function_coverage=1 00:06:06.766 --rc genhtml_legend=1 00:06:06.766 --rc geninfo_all_blocks=1 00:06:06.766 --rc geninfo_unexecuted_blocks=1 00:06:06.766 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:06.766 ' 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:06.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.766 --rc genhtml_branch_coverage=1 00:06:06.766 --rc genhtml_function_coverage=1 00:06:06.766 --rc genhtml_legend=1 00:06:06.766 --rc geninfo_all_blocks=1 00:06:06.766 --rc geninfo_unexecuted_blocks=1 00:06:06.766 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:06.766 ' 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:06.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.766 --rc genhtml_branch_coverage=1 00:06:06.766 --rc genhtml_function_coverage=1 00:06:06.766 --rc genhtml_legend=1 00:06:06.766 --rc geninfo_all_blocks=1 00:06:06.766 --rc geninfo_unexecuted_blocks=1 00:06:06.766 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:06.766 ' 00:06:06.766 07:02:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.766 07:02:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.766 07:02:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.766 07:02:11 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.766 07:02:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.766 ************************************ 00:06:06.766 START TEST default_locks 00:06:06.766 ************************************ 00:06:06.766 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3769906 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3769906 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3769906 ']' 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:06.767 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.767 [2024-11-20 07:02:11.310823] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:06.767 [2024-11-20 07:02:11.310874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769906 ] 00:06:07.026 [2024-11-20 07:02:11.380162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.026 [2024-11-20 07:02:11.426726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.285 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.285 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:07.285 07:02:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3769906 00:06:07.285 07:02:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3769906 00:06:07.285 07:02:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.544 lslocks: write error 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3769906 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3769906 ']' 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3769906 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3769906 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3769906' 00:06:07.544 killing process with pid 3769906 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3769906 00:06:07.544 07:02:11 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3769906 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3769906 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3769906 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3769906 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3769906 ']' 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.806 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3769906) - No such process 00:06:07.806 ERROR: process (pid: 3769906) is no longer running 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.806 00:06:07.806 real 0m0.960s 00:06:07.806 user 0m0.897s 00:06:07.806 sys 0m0.465s 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.806 07:02:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.806 ************************************ 00:06:07.806 END TEST default_locks 00:06:07.806 ************************************ 00:06:07.806 07:02:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.806 07:02:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:07.806 07:02:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.806 07:02:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.806 ************************************ 00:06:07.806 START TEST default_locks_via_rpc 00:06:07.806 ************************************ 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3770095 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3770095 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3770095 ']' 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 
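The default_locks test that just ended asserts that a single-core target holds an spdk_cpu_lock file for its core and that nothing stale survives the kill. A minimal sketch of that check, run from the spdk checkout; the lslocks/grep pipeline mirrors cpu_locks.sh@22, while the backgrounding and sleep are simplifications of the test's waitforlisten and killprocess helpers:

  build/bin/spdk_tgt -m 0x1 &           # claim core 0
  pid=$!
  sleep 1                               # stand-in for waitforlisten on /var/tmp/spdk.sock
  lslocks -p $pid | grep -q spdk_cpu_lock && echo "core lock held"
  kill $pid
  # no_locks: after the kill, no spdk_cpu_lock entries should remain
  lslocks | grep -c spdk_cpu_lock       # expected to print 0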
00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.806 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.806 [2024-11-20 07:02:12.360790] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:07.806 [2024-11-20 07:02:12.360858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770095 ] 00:06:08.065 [2024-11-20 07:02:12.430127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.065 [2024-11-20 07:02:12.473246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3770095 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3770095 00:06:08.324 07:02:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3770095 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3770095 ']' 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3770095 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- 
# '[' Linux = Linux ']' 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3770095 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3770095' 00:06:08.892 killing process with pid 3770095 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3770095 00:06:08.892 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3770095 00:06:09.151 00:06:09.151 real 0m1.350s 00:06:09.151 user 0m1.356s 00:06:09.151 sys 0m0.623s 00:06:09.151 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.151 07:02:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.151 ************************************ 00:06:09.151 END TEST default_locks_via_rpc 00:06:09.151 ************************************ 00:06:09.410 07:02:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.410 07:02:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:09.410 07:02:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.410 07:02:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.410 ************************************ 00:06:09.410 START TEST non_locking_app_on_locked_coremask 00:06:09.410 ************************************ 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3770392 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3770392 /var/tmp/spdk.sock 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3770392 ']' 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.410 07:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.410 [2024-11-20 07:02:13.788960] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
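default_locks_via_rpc, which ends above, exercises the same lock through RPC rather than process lifetime: the framework_disable_cpumask_locks and framework_enable_cpumask_locks methods (both visible in the trace as rpc_cmd calls) drop and re-claim the per-core lock at runtime. A sketch under the same assumptions as before, with the sleep again standing in for waitforlisten:

  build/bin/spdk_tgt -m 0x1 &
  sleep 1
  scripts/rpc.py framework_disable_cpumask_locks   # release the spdk_cpu_lock file
  lslocks | grep -c spdk_cpu_lock                  # now 0 for this target
  scripts/rpc.py framework_enable_cpumask_locks    # take the lock again
  lslocks | grep -q spdk_cpu_lock && echo "lock re-acquired"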
00:06:09.411 [2024-11-20 07:02:13.789019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770392 ] 00:06:09.411 [2024-11-20 07:02:13.859807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.411 [2024-11-20 07:02:13.902110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.669 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.669 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:09.669 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3770398 00:06:09.669 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3770398 /var/tmp/spdk2.sock 00:06:09.669 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3770398 ']' 00:06:09.669 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.669 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.670 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.670 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.670 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.670 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.670 [2024-11-20 07:02:14.131528] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:09.670 [2024-11-20 07:02:14.131636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770398 ] 00:06:09.928 [2024-11-20 07:02:14.235490] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.928 [2024-11-20 07:02:14.235521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.928 [2024-11-20 07:02:14.323008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.495 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.495 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:10.495 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3770392 00:06:10.495 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3770392 00:06:10.495 07:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.871 lslocks: write error 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3770392 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3770392 ']' 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3770392 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3770392 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3770392' 00:06:11.871 killing process with pid 3770392 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3770392 00:06:11.871 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3770392 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3770398 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3770398 ']' 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3770398 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3770398 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3770398' 00:06:12.439 
killing process with pid 3770398 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3770398 00:06:12.439 07:02:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3770398 00:06:12.698 00:06:12.698 real 0m3.305s 00:06:12.698 user 0m3.474s 00:06:12.698 sys 0m1.242s 00:06:12.698 07:02:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:12.698 07:02:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.698 ************************************ 00:06:12.698 END TEST non_locking_app_on_locked_coremask 00:06:12.698 ************************************ 00:06:12.698 07:02:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.698 07:02:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:12.698 07:02:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.698 07:02:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.698 ************************************ 00:06:12.698 START TEST locking_app_on_unlocked_coremask 00:06:12.698 ************************************ 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3770967 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3770967 /var/tmp/spdk.sock 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3770967 ']' 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:12.698 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.698 [2024-11-20 07:02:17.180035] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:12.698 [2024-11-20 07:02:17.180100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770967 ] 00:06:12.698 [2024-11-20 07:02:17.251255] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
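The two tests around this point pair spdk_tgt instances on the same core mask: non_locking_app_on_locked_coremask, ending above, starts the second instance with --disable-cpumask-locks, while locking_app_on_unlocked_coremask, starting here, puts that flag on the first. A sketch of the latter arrangement, flags and socket path as in the trace:

  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # first app: takes no core lock
  sleep 1
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # second app: same core, own RPC socket
  # both run concurrently because core 0 was never locked by the first;
  # the second instance claims the lock itself (lslocks -p <pid2> shows it)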
00:06:12.698 [2024-11-20 07:02:17.251281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.957 [2024-11-20 07:02:17.293962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3771099 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3771099 /var/tmp/spdk2.sock 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3771099 ']' 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:12.957 07:02:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.216 [2024-11-20 07:02:17.524687] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:13.216 [2024-11-20 07:02:17.524754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771099 ] 00:06:13.216 [2024-11-20 07:02:17.620100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.216 [2024-11-20 07:02:17.700224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.152 07:02:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.152 07:02:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:14.152 07:02:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3771099 00:06:14.152 07:02:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3771099 00:06:14.152 07:02:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.089 lslocks: write error 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3770967 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3770967 ']' 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3770967 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3770967 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3770967' 00:06:15.089 killing process with pid 3770967 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3770967 00:06:15.089 07:02:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3770967 00:06:15.656 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3771099 00:06:15.656 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3771099 ']' 00:06:15.656 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3771099 00:06:15.656 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:15.656 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.656 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3771099 00:06:15.915 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.915 07:02:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.915 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3771099' 00:06:15.915 killing process with pid 3771099 00:06:15.915 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3771099 00:06:15.915 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3771099 00:06:16.173 00:06:16.173 real 0m3.369s 00:06:16.173 user 0m3.559s 00:06:16.173 sys 0m1.263s 00:06:16.173 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.174 ************************************ 00:06:16.174 END TEST locking_app_on_unlocked_coremask 00:06:16.174 ************************************ 00:06:16.174 07:02:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.174 07:02:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.174 07:02:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.174 07:02:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.174 ************************************ 00:06:16.174 START TEST locking_app_on_locked_coremask 00:06:16.174 ************************************ 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3771678 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3771678 /var/tmp/spdk.sock 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3771678 ']' 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.174 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.174 [2024-11-20 07:02:20.633058] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:16.174 [2024-11-20 07:02:20.633121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771678 ] 00:06:16.174 [2024-11-20 07:02:20.704094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.433 [2024-11-20 07:02:20.747590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3771797 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3771797 /var/tmp/spdk2.sock 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3771797 /var/tmp/spdk2.sock 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3771797 /var/tmp/spdk2.sock 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3771797 ']' 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:16.433 07:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.433 [2024-11-20 07:02:20.975198] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:16.433 [2024-11-20 07:02:20.975264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771797 ] 00:06:16.692 [2024-11-20 07:02:21.067490] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3771678 has claimed it. 00:06:16.692 [2024-11-20 07:02:21.067520] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.260 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3771797) - No such process 00:06:17.260 ERROR: process (pid: 3771797) is no longer running 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3771678 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3771678 00:06:17.260 07:02:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.829 lslocks: write error 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3771678 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3771678 ']' 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3771678 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3771678 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3771678' 00:06:17.829 killing process with pid 3771678 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3771678 00:06:17.829 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3771678 00:06:18.088 00:06:18.088 real 0m1.959s 00:06:18.088 user 0m2.079s 00:06:18.088 sys 0m0.713s 00:06:18.088 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:06:18.088 07:02:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.088 ************************************ 00:06:18.088 END TEST locking_app_on_locked_coremask 00:06:18.088 ************************************ 00:06:18.088 07:02:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.088 07:02:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.088 07:02:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.088 07:02:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.347 ************************************ 00:06:18.347 START TEST locking_overlapped_coremask 00:06:18.347 ************************************ 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3772091 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3772091 /var/tmp/spdk.sock 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3772091 ']' 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.347 07:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.347 [2024-11-20 07:02:22.677657] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:18.347 [2024-11-20 07:02:22.677723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772091 ] 00:06:18.347 [2024-11-20 07:02:22.765562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.347 [2024-11-20 07:02:22.806506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.347 [2024-11-20 07:02:22.806611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.347 [2024-11-20 07:02:22.806610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3772106 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3772106 /var/tmp/spdk2.sock 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3772106 /var/tmp/spdk2.sock 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3772106 /var/tmp/spdk2.sock 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3772106 ']' 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.606 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.606 [2024-11-20 07:02:23.041860] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:18.606 [2024-11-20 07:02:23.041921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772106 ] 00:06:18.606 [2024-11-20 07:02:23.141980] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3772091 has claimed it. 00:06:18.606 [2024-11-20 07:02:23.142022] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.173 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3772106) - No such process 00:06:19.173 ERROR: process (pid: 3772106) is no longer running 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3772091 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3772091 ']' 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3772091 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.173 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3772091 00:06:19.432 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.432 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.432 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3772091' 00:06:19.432 killing process with pid 3772091 00:06:19.432 07:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3772091 00:06:19.432 07:02:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3772091 00:06:19.691 00:06:19.691 real 0m1.422s 00:06:19.691 user 0m3.909s 00:06:19.691 sys 0m0.428s 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.691 ************************************ 00:06:19.691 END TEST locking_overlapped_coremask 00:06:19.691 ************************************ 00:06:19.691 07:02:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:19.691 07:02:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.691 07:02:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.691 07:02:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.691 ************************************ 00:06:19.691 START TEST locking_overlapped_coremask_via_rpc 00:06:19.691 ************************************ 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3772394 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3772394 /var/tmp/spdk.sock 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3772394 ']' 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.691 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:19.691 [2024-11-20 07:02:24.177017] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:19.691 [2024-11-20 07:02:24.177087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772394 ] 00:06:19.950 [2024-11-20 07:02:24.250048] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.950 [2024-11-20 07:02:24.250074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.950 [2024-11-20 07:02:24.295936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.950 [2024-11-20 07:02:24.296031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.950 [2024-11-20 07:02:24.296031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.950 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3772408 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3772408 /var/tmp/spdk2.sock 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3772408 ']' 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.951 07:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:20.209 [2024-11-20 07:02:24.526447] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:20.209 [2024-11-20 07:02:24.526526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772408 ] 00:06:20.209 [2024-11-20 07:02:24.626829] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.209 [2024-11-20 07:02:24.626857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.209 [2024-11-20 07:02:24.710142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.209 [2024-11-20 07:02:24.713644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.209 [2024-11-20 07:02:24.713646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.146 [2024-11-20 07:02:25.404671] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3772394 has claimed it. 
00:06:21.146 request: 00:06:21.146 { 00:06:21.146 "method": "framework_enable_cpumask_locks", 00:06:21.146 "req_id": 1 00:06:21.146 } 00:06:21.146 Got JSON-RPC error response 00:06:21.146 response: 00:06:21.146 { 00:06:21.146 "code": -32603, 00:06:21.146 "message": "Failed to claim CPU core: 2" 00:06:21.146 } 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3772394 /var/tmp/spdk.sock 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3772394 ']' 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3772408 /var/tmp/spdk2.sock 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3772408 ']' 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
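For context on the exchange above: both targets in this test were launched with --disable-cpumask-locks, so they could start on the overlapping masks 0x7 (cores 0-2) and 0x1c (cores 2-4) — the masks intersect because 0x7 & 0x1c == 0x4, i.e. core 2. The first instance then claimed its cores at runtime through the framework_enable_cpumask_locks RPC, so the second instance's attempt returned the -32603 error shown. A sketch of the same flow driven by hand with scripts/rpc.py and the socket paths from this run (the test itself goes through its rpc_cmd helper rather than calling rpc.py directly):

# First instance (default /var/tmp/spdk.sock) claims its cores at runtime.
./scripts/rpc.py framework_enable_cpumask_locks
# Second instance overlaps on core 2 (0x7 & 0x1c == 0x4), so this is
# expected to fail with JSON-RPC error -32603 "Failed to claim CPU core: 2".
if ! ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "expected failure: core 2 already locked by the first target"
fi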
00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.146 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.405 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.405 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:21.405 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:21.405 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.405 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.405 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.405 00:06:21.405 real 0m1.660s 00:06:21.405 user 0m0.764s 00:06:21.405 sys 0m0.170s 00:06:21.405 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.405 07:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.405 ************************************ 00:06:21.405 END TEST locking_overlapped_coremask_via_rpc 00:06:21.405 ************************************ 00:06:21.405 07:02:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:21.405 07:02:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3772394 ]] 00:06:21.405 07:02:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3772394 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3772394 ']' 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3772394 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3772394 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3772394' 00:06:21.405 killing process with pid 3772394 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3772394 00:06:21.405 07:02:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3772394 00:06:21.973 07:02:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3772408 ]] 00:06:21.973 07:02:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3772408 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3772408 ']' 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3772408 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3772408 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3772408' 00:06:21.973 killing process with pid 3772408 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3772408 00:06:21.973 07:02:26 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3772408 00:06:22.231 07:02:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.232 07:02:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.232 07:02:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3772394 ]] 00:06:22.232 07:02:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3772394 00:06:22.232 07:02:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3772394 ']' 00:06:22.232 07:02:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3772394 00:06:22.232 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3772394) - No such process 00:06:22.232 07:02:26 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3772394 is not found' 00:06:22.232 Process with pid 3772394 is not found 00:06:22.232 07:02:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3772408 ]] 00:06:22.232 07:02:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3772408 00:06:22.232 07:02:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3772408 ']' 00:06:22.232 07:02:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3772408 00:06:22.232 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3772408) - No such process 00:06:22.232 07:02:26 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3772408 is not found' 00:06:22.232 Process with pid 3772408 is not found 00:06:22.232 07:02:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.232 00:06:22.232 real 0m15.500s 00:06:22.232 user 0m25.825s 00:06:22.232 sys 0m5.958s 00:06:22.232 07:02:26 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.232 07:02:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.232 ************************************ 00:06:22.232 END TEST cpu_locks 00:06:22.232 ************************************ 00:06:22.232 00:06:22.232 real 0m40.334s 00:06:22.232 user 1m14.942s 00:06:22.232 sys 0m10.252s 00:06:22.232 07:02:26 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.232 07:02:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.232 ************************************ 00:06:22.232 END TEST event 00:06:22.232 ************************************ 00:06:22.232 07:02:26 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:22.232 07:02:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.232 07:02:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.232 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:22.232 ************************************ 00:06:22.232 START TEST thread 00:06:22.232 ************************************ 00:06:22.232 07:02:26 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:22.491 * Looking for test storage... 00:06:22.491 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:22.491 07:02:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.491 07:02:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.491 07:02:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.491 07:02:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.491 07:02:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.491 07:02:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.491 07:02:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.491 07:02:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.491 07:02:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.491 07:02:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.491 07:02:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.491 07:02:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:22.491 07:02:26 thread -- scripts/common.sh@345 -- # : 1 00:06:22.491 07:02:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.491 07:02:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.491 07:02:26 thread -- scripts/common.sh@365 -- # decimal 1 00:06:22.491 07:02:26 thread -- scripts/common.sh@353 -- # local d=1 00:06:22.491 07:02:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.491 07:02:26 thread -- scripts/common.sh@355 -- # echo 1 00:06:22.491 07:02:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.491 07:02:26 thread -- scripts/common.sh@366 -- # decimal 2 00:06:22.491 07:02:26 thread -- scripts/common.sh@353 -- # local d=2 00:06:22.491 07:02:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.491 07:02:26 thread -- scripts/common.sh@355 -- # echo 2 00:06:22.491 07:02:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.491 07:02:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.491 07:02:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.491 07:02:26 thread -- scripts/common.sh@368 -- # return 0 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:22.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.491 --rc genhtml_branch_coverage=1 00:06:22.491 --rc genhtml_function_coverage=1 00:06:22.491 --rc genhtml_legend=1 00:06:22.491 --rc geninfo_all_blocks=1 00:06:22.491 --rc geninfo_unexecuted_blocks=1 00:06:22.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.491 ' 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:22.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.491 --rc genhtml_branch_coverage=1 00:06:22.491 --rc genhtml_function_coverage=1 00:06:22.491 --rc genhtml_legend=1 
00:06:22.491 --rc geninfo_all_blocks=1 00:06:22.491 --rc geninfo_unexecuted_blocks=1 00:06:22.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.491 ' 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:22.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.491 --rc genhtml_branch_coverage=1 00:06:22.491 --rc genhtml_function_coverage=1 00:06:22.491 --rc genhtml_legend=1 00:06:22.491 --rc geninfo_all_blocks=1 00:06:22.491 --rc geninfo_unexecuted_blocks=1 00:06:22.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.491 ' 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:22.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.491 --rc genhtml_branch_coverage=1 00:06:22.491 --rc genhtml_function_coverage=1 00:06:22.491 --rc genhtml_legend=1 00:06:22.491 --rc geninfo_all_blocks=1 00:06:22.491 --rc geninfo_unexecuted_blocks=1 00:06:22.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.491 ' 00:06:22.491 07:02:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.491 07:02:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.491 ************************************ 00:06:22.491 START TEST thread_poller_perf 00:06:22.491 ************************************ 00:06:22.491 07:02:26 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.491 [2024-11-20 07:02:26.968036] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:22.492 [2024-11-20 07:02:26.968116] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773039 ] 00:06:22.492 [2024-11-20 07:02:27.040055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.750 [2024-11-20 07:02:27.079337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.750 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:23.685 [2024-11-20T06:02:28.241Z] ======================================
00:06:23.685 [2024-11-20T06:02:28.241Z] busy:2503565798 (cyc)
00:06:23.685 [2024-11-20T06:02:28.241Z] total_run_count: 862000
00:06:23.685 [2024-11-20T06:02:28.241Z] tsc_hz: 2500000000 (cyc)
00:06:23.685 [2024-11-20T06:02:28.241Z] ======================================
00:06:23.685 [2024-11-20T06:02:28.241Z] poller_cost: 2904 (cyc), 1161 (nsec)
00:06:23.685
00:06:23.685 real 0m1.165s
00:06:23.685 user 0m1.081s
00:06:23.685 sys 0m0.081s
00:06:23.685 07:02:28 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:23.685 07:02:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:23.685 ************************************
00:06:23.685 END TEST thread_poller_perf
00:06:23.685 ************************************
00:06:23.685 07:02:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:23.685 07:02:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']'
00:06:23.685 07:02:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:23.685 07:02:28 thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.685 ************************************
00:06:23.685 START TEST thread_poller_perf
00:06:23.685 ************************************
00:06:23.685 07:02:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:23.944 [2024-11-20 07:02:28.187850] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
00:06:23.944 [2024-11-20 07:02:28.187892] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773174 ]
00:06:23.944 [2024-11-20 07:02:28.255045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:23.944 [2024-11-20 07:02:28.293838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.944 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:06:24.880 [2024-11-20T06:02:29.436Z] ======================================
00:06:24.880 [2024-11-20T06:02:29.436Z] busy:2501414282 (cyc)
00:06:24.880 [2024-11-20T06:02:29.436Z] total_run_count: 13563000
00:06:24.880 [2024-11-20T06:02:29.436Z] tsc_hz: 2500000000 (cyc)
00:06:24.880 [2024-11-20T06:02:29.436Z] ======================================
00:06:24.880 [2024-11-20T06:02:29.436Z] poller_cost: 184 (cyc), 73 (nsec)
00:06:24.880
00:06:24.880 real 0m1.148s
00:06:24.880 user 0m1.074s
00:06:24.880 sys 0m0.070s
00:06:24.880 07:02:29 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:24.880 07:02:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:24.880 ************************************
00:06:24.880 END TEST thread_poller_perf
00:06:24.880 ************************************
00:06:24.880 07:02:29 thread -- thread/thread.sh@17 -- # [[ n != \y ]]
00:06:24.880 07:02:29 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock
00:06:24.880 07:02:29 thread -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:24.880 07:02:29 thread -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:24.880 07:02:29 thread -- common/autotest_common.sh@10 -- # set +x
00:06:24.880 ************************************
00:06:24.880 START TEST thread_spdk_lock
00:06:24.880 ************************************
00:06:24.880 07:02:29 thread.thread_spdk_lock -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock
00:06:24.880 [2024-11-20 07:02:29.419362] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
00:06:24.880 [2024-11-20 07:02:29.419417] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773358 ]
00:06:25.138 [2024-11-20 07:02:29.487790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:25.138 [2024-11-20 07:02:29.528651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:25.138 [2024-11-20 07:02:29.528654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.703 [2024-11-20 07:02:30.026161] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:06:25.703 [2024-11-20 07:02:30.026201] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:06:25.703 [2024-11-20 07:02:30.026212] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14daa80
00:06:25.703 [2024-11-20 07:02:30.026959] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:06:25.703 [2024-11-20 07:02:30.027062] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:06:25.703 [2024-11-20 07:02:30.027086] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:06:25.703 Starting test contend
00:06:25.703   Worker    Delay  Wait us  Hold us Total us
00:06:25.703        0        3   176737   189004   365741
00:06:25.703        1        5    92003   289657   381660
00:06:25.703 PASS test contend
00:06:25.703 Starting test hold_by_poller
00:06:25.703 PASS test hold_by_poller
00:06:25.703 Starting test hold_by_message
00:06:25.703 PASS test hold_by_message
00:06:25.703 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary:
00:06:25.703 100014 assertions passed
00:06:25.703 0 assertions failed
00:06:25.703
00:06:25.703 real 0m0.655s
00:06:25.703 user 0m1.071s
00:06:25.703 sys 0m0.077s
00:06:25.703 07:02:30 thread.thread_spdk_lock -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:25.703 07:02:30 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:06:25.703 ************************************
00:06:25.703 END TEST thread_spdk_lock
00:06:25.703 ************************************
00:06:25.703
00:06:25.703 real 0m3.386s
00:06:25.703 user 0m3.413s
00:06:25.703 sys 0m0.494s
00:06:25.703 07:02:30 thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:25.703 07:02:30 thread -- common/autotest_common.sh@10 -- # set +x
00:06:25.703 ************************************
00:06:25.703 END TEST thread
00:06:25.703 ************************************
00:06:25.703 07:02:30 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:06:25.703 07:02:30 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:06:25.703 07:02:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:25.703 07:02:30 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:25.703 07:02:30 -- common/autotest_common.sh@10 -- # set +x
00:06:25.703 ************************************
00:06:25.703 START TEST app_cmdline
00:06:25.703 ************************************
00:06:25.703 07:02:30 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:06:25.961 * Looking for test storage...
00:06:25.961 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:25.961 07:02:30 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:25.961 07:02:30 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:25.961 07:02:30 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:25.961 07:02:30 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.961 07:02:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.962 07:02:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:25.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.962 --rc genhtml_branch_coverage=1 00:06:25.962 --rc genhtml_function_coverage=1 00:06:25.962 --rc genhtml_legend=1 00:06:25.962 --rc geninfo_all_blocks=1 00:06:25.962 --rc geninfo_unexecuted_blocks=1 00:06:25.962 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.962 ' 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:25.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.962 --rc genhtml_branch_coverage=1 00:06:25.962 --rc genhtml_function_coverage=1 00:06:25.962 --rc 
genhtml_legend=1 00:06:25.962 --rc geninfo_all_blocks=1 00:06:25.962 --rc geninfo_unexecuted_blocks=1 00:06:25.962 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.962 ' 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:25.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.962 --rc genhtml_branch_coverage=1 00:06:25.962 --rc genhtml_function_coverage=1 00:06:25.962 --rc genhtml_legend=1 00:06:25.962 --rc geninfo_all_blocks=1 00:06:25.962 --rc geninfo_unexecuted_blocks=1 00:06:25.962 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.962 ' 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:25.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.962 --rc genhtml_branch_coverage=1 00:06:25.962 --rc genhtml_function_coverage=1 00:06:25.962 --rc genhtml_legend=1 00:06:25.962 --rc geninfo_all_blocks=1 00:06:25.962 --rc geninfo_unexecuted_blocks=1 00:06:25.962 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.962 ' 00:06:25.962 07:02:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:25.962 07:02:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3773683 00:06:25.962 07:02:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3773683 00:06:25.962 07:02:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3773683 ']' 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.962 07:02:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.962 [2024-11-20 07:02:30.417289] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:25.962 [2024-11-20 07:02:30.417359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773683 ] 00:06:25.962 [2024-11-20 07:02:30.488745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.220 [2024-11-20 07:02:30.532952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.220 07:02:30 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.220 07:02:30 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:26.220 07:02:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:26.479 { 00:06:26.479 "version": "SPDK v25.01-pre git sha1 6745f139b", 00:06:26.479 "fields": { 00:06:26.479 "major": 25, 00:06:26.479 "minor": 1, 00:06:26.479 "patch": 0, 00:06:26.479 "suffix": "-pre", 00:06:26.479 "commit": "6745f139b" 00:06:26.479 } 00:06:26.479 } 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:26.479 07:02:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:26.479 07:02:30 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:26.479 07:02:30 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.739 request: 00:06:26.739 { 00:06:26.739 "method": "env_dpdk_get_mem_stats", 00:06:26.739 "req_id": 1 00:06:26.739 } 00:06:26.739 Got JSON-RPC error response 00:06:26.739 response: 00:06:26.739 { 00:06:26.739 "code": -32601, 00:06:26.739 "message": "Method not found" 00:06:26.739 } 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.739 07:02:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3773683 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3773683 ']' 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3773683 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3773683 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3773683' 00:06:26.739 killing process with pid 3773683 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@971 -- # kill 3773683 00:06:26.739 07:02:31 app_cmdline -- common/autotest_common.sh@976 -- # wait 3773683 00:06:26.998 00:06:26.998 real 0m1.341s 00:06:26.998 user 0m1.540s 00:06:26.998 sys 0m0.490s 00:06:26.998 07:02:31 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.998 07:02:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.998 ************************************ 00:06:26.998 END TEST app_cmdline 00:06:26.998 ************************************ 00:06:27.256 07:02:31 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:27.256 07:02:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.256 07:02:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.256 07:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.256 ************************************ 00:06:27.256 START TEST version 00:06:27.256 ************************************ 00:06:27.256 07:02:31 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:27.256 * Looking for test storage... 
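The -32601 in the app_cmdline test above is the RPC allowlist at work: that target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside the list is rejected as "Method not found" before it can be dispatched. A condensed reproduction with the same binary and flags as this run (the harness additionally waits for the RPC socket via waitforlisten before issuing calls):

# Start a target that answers only the two allowlisted methods.
./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version         # allowed: prints the version JSON
./scripts/rpc.py rpc_get_methods          # allowed: lists exactly these two methods
./scripts/rpc.py env_dpdk_get_mem_stats   # rejected: JSON-RPC -32601 "Method not found"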
00:06:27.256 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:27.256 07:02:31 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.256 07:02:31 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.256 07:02:31 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.256 07:02:31 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.256 07:02:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.256 07:02:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.256 07:02:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.256 07:02:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.256 07:02:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.256 07:02:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.256 07:02:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.256 07:02:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.256 07:02:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.256 07:02:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.256 07:02:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.256 07:02:31 version -- scripts/common.sh@344 -- # case "$op" in 00:06:27.256 07:02:31 version -- scripts/common.sh@345 -- # : 1 00:06:27.256 07:02:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.256 07:02:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.256 07:02:31 version -- scripts/common.sh@365 -- # decimal 1 00:06:27.256 07:02:31 version -- scripts/common.sh@353 -- # local d=1 00:06:27.256 07:02:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.256 07:02:31 version -- scripts/common.sh@355 -- # echo 1 00:06:27.256 07:02:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.515 07:02:31 version -- scripts/common.sh@366 -- # decimal 2 00:06:27.515 07:02:31 version -- scripts/common.sh@353 -- # local d=2 00:06:27.515 07:02:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.515 07:02:31 version -- scripts/common.sh@355 -- # echo 2 00:06:27.515 07:02:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.515 07:02:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.515 07:02:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.515 07:02:31 version -- scripts/common.sh@368 -- # return 0 00:06:27.515 07:02:31 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.515 07:02:31 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.515 --rc genhtml_branch_coverage=1 00:06:27.515 --rc genhtml_function_coverage=1 00:06:27.515 --rc genhtml_legend=1 00:06:27.515 --rc geninfo_all_blocks=1 00:06:27.515 --rc geninfo_unexecuted_blocks=1 00:06:27.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.515 ' 00:06:27.515 07:02:31 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.515 --rc genhtml_branch_coverage=1 00:06:27.515 --rc genhtml_function_coverage=1 00:06:27.515 --rc genhtml_legend=1 00:06:27.515 --rc geninfo_all_blocks=1 00:06:27.515 --rc geninfo_unexecuted_blocks=1 00:06:27.515 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.515 ' 00:06:27.515 07:02:31 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.515 --rc genhtml_branch_coverage=1 00:06:27.515 --rc genhtml_function_coverage=1 00:06:27.515 --rc genhtml_legend=1 00:06:27.515 --rc geninfo_all_blocks=1 00:06:27.515 --rc geninfo_unexecuted_blocks=1 00:06:27.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.515 ' 00:06:27.515 07:02:31 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.515 --rc genhtml_branch_coverage=1 00:06:27.515 --rc genhtml_function_coverage=1 00:06:27.515 --rc genhtml_legend=1 00:06:27.515 --rc geninfo_all_blocks=1 00:06:27.515 --rc geninfo_unexecuted_blocks=1 00:06:27.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.515 ' 00:06:27.515 07:02:31 version -- app/version.sh@17 -- # get_header_version major 00:06:27.515 07:02:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:27.515 07:02:31 version -- app/version.sh@14 -- # cut -f2 00:06:27.515 07:02:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.515 07:02:31 version -- app/version.sh@17 -- # major=25 00:06:27.515 07:02:31 version -- app/version.sh@18 -- # get_header_version minor 00:06:27.515 07:02:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:27.515 07:02:31 version -- app/version.sh@14 -- # cut -f2 00:06:27.515 07:02:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.515 07:02:31 version -- app/version.sh@18 -- # minor=1 00:06:27.515 07:02:31 version -- app/version.sh@19 -- # get_header_version patch 00:06:27.515 07:02:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:27.515 07:02:31 version -- app/version.sh@14 -- # cut -f2 00:06:27.515 07:02:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.515 07:02:31 version -- app/version.sh@19 -- # patch=0 00:06:27.515 07:02:31 version -- app/version.sh@20 -- # get_header_version suffix 00:06:27.515 07:02:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:27.515 07:02:31 version -- app/version.sh@14 -- # cut -f2 00:06:27.515 07:02:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.515 07:02:31 version -- app/version.sh@20 -- # suffix=-pre 00:06:27.515 07:02:31 version -- app/version.sh@22 -- # version=25.1 00:06:27.515 07:02:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:27.515 07:02:31 version -- app/version.sh@28 -- # version=25.1rc0 00:06:27.515 07:02:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:27.515 07:02:31 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:27.515 07:02:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:27.515 07:02:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:27.515 00:06:27.515 real 0m0.269s 00:06:27.515 user 0m0.159s 00:06:27.515 sys 0m0.159s 00:06:27.515 07:02:31 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.515 07:02:31 version -- common/autotest_common.sh@10 -- # set +x 00:06:27.515 ************************************ 00:06:27.515 END TEST version 00:06:27.515 ************************************ 00:06:27.515 07:02:31 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:27.515 07:02:31 -- spdk/autotest.sh@194 -- # uname -s 00:06:27.515 07:02:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:27.515 07:02:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:27.515 07:02:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:27.515 07:02:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:27.515 07:02:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.515 07:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.515 07:02:31 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:27.515 07:02:31 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:06:27.515 07:02:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:27.515 07:02:31 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:06:27.515 07:02:31 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:27.515 07:02:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.515 07:02:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.515 07:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.515 ************************************ 00:06:27.515 START TEST llvm_fuzz 00:06:27.515 ************************************ 00:06:27.515 07:02:32 llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:27.774 * Looking for test storage... 
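The version test that completes above assembles the SPDK version string by grepping the SPDK_VERSION_* macros out of include/spdk/version.h with grep/cut/tr, then cross-checks the result against the bundled Python package. A hedged sketch of that extraction idiom; the helper name below is hypothetical, $rootdir is assumed to be the checkout path, and the -pre to rc0 mapping simply mirrors what the trace reports:

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk  # assumed checkout path
    header_version() {  # hypothetical helper mirroring the traced grep/cut/tr pipeline
        # Field 2 of the "#define SPDK_VERSION_<NAME><tab><value>" line is the value;
        # cut's default tab delimiter matches the header layout, tr strips the quotes.
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$rootdir/include/spdk/version.h" |
            cut -f2 | tr -d '"'
    }
    major=$(header_version MAJOR)    # 25 in this run
    minor=$(header_version MINOR)    # 1
    patch=$(header_version PATCH)    # 0
    suffix=$(header_version SUFFIX)  # -pre
    version="$major.$minor"
    ((patch != 0)) && version+=".$patch"
    [[ $suffix == -pre ]] && version+=rc0  # matches the traced 25.1rc0
    python3 -c 'import spdk; print(spdk.__version__)'  # should print the same string
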
00:06:27.774 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.774 07:02:32 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.774 --rc genhtml_branch_coverage=1 00:06:27.774 --rc genhtml_function_coverage=1 00:06:27.774 --rc genhtml_legend=1 00:06:27.774 --rc geninfo_all_blocks=1 00:06:27.774 --rc geninfo_unexecuted_blocks=1 00:06:27.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.774 ' 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.774 --rc genhtml_branch_coverage=1 00:06:27.774 --rc genhtml_function_coverage=1 00:06:27.774 --rc genhtml_legend=1 00:06:27.774 --rc geninfo_all_blocks=1 00:06:27.774 --rc 
geninfo_unexecuted_blocks=1 00:06:27.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.774 ' 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.774 --rc genhtml_branch_coverage=1 00:06:27.774 --rc genhtml_function_coverage=1 00:06:27.774 --rc genhtml_legend=1 00:06:27.774 --rc geninfo_all_blocks=1 00:06:27.774 --rc geninfo_unexecuted_blocks=1 00:06:27.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.774 ' 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.774 --rc genhtml_branch_coverage=1 00:06:27.774 --rc genhtml_function_coverage=1 00:06:27.774 --rc genhtml_legend=1 00:06:27.774 --rc geninfo_all_blocks=1 00:06:27.774 --rc geninfo_unexecuted_blocks=1 00:06:27.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.774 ' 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:27.774 07:02:32 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.774 07:02:32 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:27.774 ************************************ 00:06:27.774 START TEST nvmf_llvm_fuzz 00:06:27.774 ************************************ 00:06:27.774 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:27.774 * Looking for test storage... 
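llvm.sh, traced above, discovers fuzzer targets by globbing test/fuzz/llvm/, stripping each path down to its basename, and dispatching only the entries that name a real target; helper files the glob also matches (common.sh, llvm-gcov.sh) fall through the case statement. A condensed sketch of that loop, with $rootdir assumed and run_test replaced by an echo:

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk  # assumed checkout path
    fuzzers=("$rootdir/test/fuzz/llvm/"*)   # glob everything in the llvm fuzz directory
    fuzzers=("${fuzzers[@]##*/}")           # keep only the basenames
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio)  # real targets each carry their own run.sh
                echo "run_test ${fuzzer}_llvm_fuzz $rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
            *) ;;         # common.sh, llvm-gcov.sh: support files, skipped
        esac
    done
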
00:06:27.774 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:27.774 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:28.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.037 --rc genhtml_branch_coverage=1 00:06:28.037 --rc genhtml_function_coverage=1 00:06:28.037 --rc genhtml_legend=1 00:06:28.037 --rc geninfo_all_blocks=1 00:06:28.037 --rc geninfo_unexecuted_blocks=1 00:06:28.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.037 ' 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:28.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.037 --rc genhtml_branch_coverage=1 00:06:28.037 --rc genhtml_function_coverage=1 00:06:28.037 --rc genhtml_legend=1 00:06:28.037 --rc geninfo_all_blocks=1 00:06:28.037 --rc geninfo_unexecuted_blocks=1 00:06:28.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.037 ' 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:28.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.037 --rc genhtml_branch_coverage=1 00:06:28.037 --rc genhtml_function_coverage=1 00:06:28.037 --rc genhtml_legend=1 00:06:28.037 --rc geninfo_all_blocks=1 00:06:28.037 --rc geninfo_unexecuted_blocks=1 00:06:28.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.037 ' 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:28.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.037 --rc genhtml_branch_coverage=1 00:06:28.037 --rc genhtml_function_coverage=1 00:06:28.037 --rc genhtml_legend=1 00:06:28.037 --rc geninfo_all_blocks=1 00:06:28.037 --rc geninfo_unexecuted_blocks=1 00:06:28.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.037 ' 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:28.037 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:28.038 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:28.038 #define SPDK_CONFIG_H 00:06:28.038 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:28.038 #define SPDK_CONFIG_APPS 1 00:06:28.038 #define SPDK_CONFIG_ARCH native 00:06:28.038 #undef SPDK_CONFIG_ASAN 00:06:28.038 #undef SPDK_CONFIG_AVAHI 00:06:28.038 #undef SPDK_CONFIG_CET 00:06:28.038 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:28.038 #define SPDK_CONFIG_COVERAGE 1 00:06:28.038 #define SPDK_CONFIG_CROSS_PREFIX 00:06:28.038 #undef SPDK_CONFIG_CRYPTO 00:06:28.038 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:28.038 #undef SPDK_CONFIG_CUSTOMOCF 00:06:28.038 #undef SPDK_CONFIG_DAOS 00:06:28.038 #define SPDK_CONFIG_DAOS_DIR 00:06:28.038 #define SPDK_CONFIG_DEBUG 1 00:06:28.038 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:28.038 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:28.038 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:28.038 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:28.038 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:28.038 #undef SPDK_CONFIG_DPDK_UADK 00:06:28.038 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:28.038 #define SPDK_CONFIG_EXAMPLES 1 00:06:28.038 #undef SPDK_CONFIG_FC 00:06:28.038 #define SPDK_CONFIG_FC_PATH 00:06:28.038 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:28.038 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:28.038 #define SPDK_CONFIG_FSDEV 1 00:06:28.038 #undef SPDK_CONFIG_FUSE 00:06:28.038 #define SPDK_CONFIG_FUZZER 1 00:06:28.038 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:28.038 #undef 
SPDK_CONFIG_GOLANG 00:06:28.038 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:28.038 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:28.038 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:28.038 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:28.038 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:28.038 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:28.038 #undef SPDK_CONFIG_HAVE_LZ4 00:06:28.038 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:28.038 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:28.038 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:28.038 #define SPDK_CONFIG_IDXD 1 00:06:28.038 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:28.038 #undef SPDK_CONFIG_IPSEC_MB 00:06:28.038 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:28.038 #define SPDK_CONFIG_ISAL 1 00:06:28.038 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:28.038 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:28.038 #define SPDK_CONFIG_LIBDIR 00:06:28.038 #undef SPDK_CONFIG_LTO 00:06:28.038 #define SPDK_CONFIG_MAX_LCORES 128 00:06:28.038 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:28.038 #define SPDK_CONFIG_NVME_CUSE 1 00:06:28.038 #undef SPDK_CONFIG_OCF 00:06:28.038 #define SPDK_CONFIG_OCF_PATH 00:06:28.038 #define SPDK_CONFIG_OPENSSL_PATH 00:06:28.038 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:28.038 #define SPDK_CONFIG_PGO_DIR 00:06:28.038 #undef SPDK_CONFIG_PGO_USE 00:06:28.038 #define SPDK_CONFIG_PREFIX /usr/local 00:06:28.038 #undef SPDK_CONFIG_RAID5F 00:06:28.038 #undef SPDK_CONFIG_RBD 00:06:28.038 #define SPDK_CONFIG_RDMA 1 00:06:28.038 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:28.038 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:28.038 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:28.038 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:28.038 #undef SPDK_CONFIG_SHARED 00:06:28.038 #undef SPDK_CONFIG_SMA 00:06:28.038 #define SPDK_CONFIG_TESTS 1 00:06:28.038 #undef SPDK_CONFIG_TSAN 00:06:28.038 #define SPDK_CONFIG_UBLK 1 00:06:28.038 #define SPDK_CONFIG_UBSAN 1 00:06:28.038 #undef SPDK_CONFIG_UNIT_TESTS 00:06:28.038 #undef SPDK_CONFIG_URING 00:06:28.038 #define SPDK_CONFIG_URING_PATH 00:06:28.038 #undef SPDK_CONFIG_URING_ZNS 00:06:28.038 #undef SPDK_CONFIG_USDT 00:06:28.038 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:28.038 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:28.038 #define SPDK_CONFIG_VFIO_USER 1 00:06:28.039 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:28.039 #define SPDK_CONFIG_VHOST 1 00:06:28.039 #define SPDK_CONFIG_VIRTIO 1 00:06:28.039 #undef SPDK_CONFIG_VTUNE 00:06:28.039 #define SPDK_CONFIG_VTUNE_DIR 00:06:28.039 #define SPDK_CONFIG_WERROR 1 00:06:28.039 #define SPDK_CONFIG_WPDK_DIR 00:06:28.039 #undef SPDK_CONFIG_XNVME 00:06:28.039 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:28.039 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:28.040 07:02:32 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:28.040 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3774133 ]] 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3774133 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.ApWGn3 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.ApWGn3/tests/nvmf /tmp/spdk.ApWGn3 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=54019014656 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730607104 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=7711592448 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:28.041 07:02:32 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30861873152 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865301504 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=3428352 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=12340121600 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346122240 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=6000640 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30864785408 00:06:28.041 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865305600 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=520192 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=6173044736 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173057024 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:06:28.042 * Looking for test storage... 
00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=54019014656 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=9926184960 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.042 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:28.042 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:06:28.301 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:28.301 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:28.301 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.301 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.301 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.301 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.302 --rc genhtml_branch_coverage=1 00:06:28.302 --rc genhtml_function_coverage=1 00:06:28.302 --rc genhtml_legend=1 00:06:28.302 --rc geninfo_all_blocks=1 00:06:28.302 --rc geninfo_unexecuted_blocks=1 00:06:28.302 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.302 ' 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.302 --rc genhtml_branch_coverage=1 00:06:28.302 --rc genhtml_function_coverage=1 00:06:28.302 --rc genhtml_legend=1 00:06:28.302 --rc geninfo_all_blocks=1 00:06:28.302 --rc geninfo_unexecuted_blocks=1 00:06:28.302 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.302 ' 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.302 --rc genhtml_branch_coverage=1 00:06:28.302 --rc genhtml_function_coverage=1 00:06:28.302 --rc genhtml_legend=1 00:06:28.302 --rc geninfo_all_blocks=1 00:06:28.302 --rc geninfo_unexecuted_blocks=1 00:06:28.302 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.302 ' 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.302 --rc genhtml_branch_coverage=1 00:06:28.302 --rc genhtml_function_coverage=1 00:06:28.302 --rc genhtml_legend=1 00:06:28.302 --rc geninfo_all_blocks=1 00:06:28.302 --rc geninfo_unexecuted_blocks=1 00:06:28.302 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:28.302 ' 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:28.302 07:02:32 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:28.302 07:02:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:28.302 [2024-11-20 07:02:32.733955] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:28.302 [2024-11-20 07:02:32.734029] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774239 ] 00:06:28.560 [2024-11-20 07:02:33.001462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.560 [2024-11-20 07:02:33.053414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.560 [2024-11-20 07:02:33.112902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.818 [2024-11-20 07:02:33.129234] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:28.818 INFO: Running with entropic power schedule (0xFF, 100). 00:06:28.818 INFO: Seed: 3052698699 00:06:28.818 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:28.818 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:28.818 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:28.818 INFO: A corpus is not provided, starting from an empty corpus 00:06:28.818 #2 INITED exec/s: 0 rss: 65Mb 00:06:28.818 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:28.818 This may also happen if the target rejected all inputs we tried so far 00:06:28.818 [2024-11-20 07:02:33.199946] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:28.818 [2024-11-20 07:02:33.199986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.076 NEW_FUNC[1/714]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:29.076 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:29.076 #34 NEW cov: 12170 ft: 12189 corp: 2/113b lim: 320 exec/s: 0 rss: 73Mb L: 112/112 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:29.076 [2024-11-20 07:02:33.540036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:29.076 [2024-11-20 07:02:33.540085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.076 NEW_FUNC[1/1]: 0x1c33c58 in _reactor_run /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:962 00:06:29.076 #35 NEW cov: 12302 ft: 12957 corp: 3/236b lim: 320 exec/s: 0 rss: 73Mb L: 123/123 MS: 1 InsertRepeatedBytes- 00:06:29.076 [2024-11-20 07:02:33.610118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:29.076 [2024-11-20 07:02:33.610149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.335 #36 NEW cov: 12308 ft: 13139 corp: 4/359b lim: 320 exec/s: 0 rss: 73Mb L: 123/123 MS: 1 
ShuffleBytes- 00:06:29.335 [2024-11-20 07:02:33.680234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:29.335 [2024-11-20 07:02:33.680263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.335 #42 NEW cov: 12393 ft: 13484 corp: 5/482b lim: 320 exec/s: 0 rss: 73Mb L: 123/123 MS: 1 ChangeByte- 00:06:29.335 [2024-11-20 07:02:33.750465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.335 [2024-11-20 07:02:33.750502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.335 #43 NEW cov: 12393 ft: 13580 corp: 6/594b lim: 320 exec/s: 0 rss: 73Mb L: 112/123 MS: 1 ChangeBinInt- 00:06:29.335 [2024-11-20 07:02:33.800629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.335 [2024-11-20 07:02:33.800658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.335 #45 NEW cov: 12393 ft: 13637 corp: 7/658b lim: 320 exec/s: 0 rss: 73Mb L: 64/123 MS: 2 EraseBytes-InsertByte- 00:06:29.335 [2024-11-20 07:02:33.850811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.335 [2024-11-20 07:02:33.850839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.335 #46 NEW cov: 12393 ft: 13681 corp: 8/770b lim: 320 exec/s: 0 rss: 73Mb L: 112/123 MS: 1 ShuffleBytes- 00:06:29.593 [2024-11-20 07:02:33.901343] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.593 [2024-11-20 07:02:33.901371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.593 [2024-11-20 07:02:33.901519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.593 [2024-11-20 07:02:33.901535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.593 [2024-11-20 07:02:33.901666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.593 [2024-11-20 07:02:33.901685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.593 NEW_FUNC[1/1]: 0x1530678 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:06:29.593 #47 NEW cov: 12436 ft: 14012 corp: 9/966b lim: 320 exec/s: 0 rss: 73Mb L: 196/196 MS: 1 CopyPart- 00:06:29.593 [2024-11-20 07:02:33.971379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.593 [2024-11-20 
07:02:33.971406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.593 [2024-11-20 07:02:33.971512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:06:29.593 [2024-11-20 07:02:33.971528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.593 #48 NEW cov: 12447 ft: 14241 corp: 10/1097b lim: 320 exec/s: 0 rss: 74Mb L: 131/196 MS: 1 CopyPart- 00:06:29.593 [2024-11-20 07:02:34.041406] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:29.593 [2024-11-20 07:02:34.041436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.593 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:29.593 #49 NEW cov: 12470 ft: 14294 corp: 11/1185b lim: 320 exec/s: 0 rss: 74Mb L: 88/196 MS: 1 EraseBytes- 00:06:29.593 [2024-11-20 07:02:34.091507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:29.593 [2024-11-20 07:02:34.091539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.593 #50 NEW cov: 12470 ft: 14329 corp: 12/1312b lim: 320 exec/s: 0 rss: 74Mb L: 127/196 MS: 1 CMP- DE: "\027\001\000\000"- 00:06:29.903 [2024-11-20 07:02:34.161751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.903 [2024-11-20 07:02:34.161781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.903 #51 NEW cov: 12470 ft: 14370 corp: 13/1424b lim: 320 exec/s: 51 rss: 74Mb L: 112/196 MS: 1 ShuffleBytes- 00:06:29.903 [2024-11-20 07:02:34.211860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:29.903 [2024-11-20 07:02:34.211891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.903 #52 NEW cov: 12470 ft: 14444 corp: 14/1551b lim: 320 exec/s: 52 rss: 74Mb L: 127/196 MS: 1 ChangeByte- 00:06:29.903 [2024-11-20 07:02:34.282297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:29.903 [2024-11-20 07:02:34.282327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.903 [2024-11-20 07:02:34.282467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.903 [2024-11-20 07:02:34.282484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.903 #53 NEW cov: 12470 ft: 14522 corp: 15/1682b lim: 320 exec/s: 53 rss: 74Mb L: 131/196 MS: 1 CMP- DE: "\000\000\000\000\000\000\000?"- 
00:06:29.903 [2024-11-20 07:02:34.332702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.903 [2024-11-20 07:02:34.332731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.903 [2024-11-20 07:02:34.332860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:8affffff cdw11:8a8a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.903 [2024-11-20 07:02:34.332878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.903 [2024-11-20 07:02:34.333008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:6 nsid:8a8a8a8a cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:29.903 [2024-11-20 07:02:34.333025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.903 #54 NEW cov: 12471 ft: 14614 corp: 16/1912b lim: 320 exec/s: 54 rss: 74Mb L: 230/230 MS: 1 InsertRepeatedBytes- 00:06:29.903 [2024-11-20 07:02:34.402840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:29.903 [2024-11-20 07:02:34.402871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.903 [2024-11-20 07:02:34.403010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.903 [2024-11-20 07:02:34.403028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.903 [2024-11-20 07:02:34.403144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.903 [2024-11-20 07:02:34.403162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.201 #55 NEW cov: 12471 ft: 14629 corp: 17/2162b lim: 320 exec/s: 55 rss: 74Mb L: 250/250 MS: 1 InsertRepeatedBytes- 00:06:30.201 [2024-11-20 07:02:34.472950] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:30.201 [2024-11-20 07:02:34.472980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.473124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.201 [2024-11-20 07:02:34.473142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.201 #56 NEW cov: 12471 ft: 14643 corp: 18/2293b lim: 320 exec/s: 56 rss: 74Mb L: 131/250 MS: 1 ChangeBinInt- 00:06:30.201 [2024-11-20 07:02:34.523203] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 
00:06:30.201 [2024-11-20 07:02:34.523233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.523372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:2b2b2b2b cdw11:2b2b2b2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b2b2b2b2b2b2b2b 00:06:30.201 [2024-11-20 07:02:34.523392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.523534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2b) qid:0 cid:6 nsid:2b2b2b2b cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b2b2b2b2b2b2b2b 00:06:30.201 [2024-11-20 07:02:34.523552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.201 #57 NEW cov: 12471 ft: 14671 corp: 19/2497b lim: 320 exec/s: 57 rss: 74Mb L: 204/250 MS: 1 InsertRepeatedBytes- 00:06:30.201 [2024-11-20 07:02:34.573369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.201 [2024-11-20 07:02:34.573398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.573534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff0004ffffffffff 00:06:30.201 [2024-11-20 07:02:34.573551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.573694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.201 [2024-11-20 07:02:34.573712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.201 #58 NEW cov: 12471 ft: 14687 corp: 20/2693b lim: 320 exec/s: 58 rss: 74Mb L: 196/250 MS: 1 ChangeBinInt- 00:06:30.201 [2024-11-20 07:02:34.623577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.201 [2024-11-20 07:02:34.623611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.623725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.201 [2024-11-20 07:02:34.623742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.623882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.201 [2024-11-20 07:02:34.623903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.201 #59 NEW cov: 12471 ft: 14711 corp: 21/2889b lim: 320 exec/s: 59 rss: 74Mb L: 196/250 MS: 1 
InsertRepeatedBytes- 00:06:30.201 [2024-11-20 07:02:34.673697] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:30.201 [2024-11-20 07:02:34.673726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.673871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffff0000 cdw10:ffffffff cdw11:b5ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.201 [2024-11-20 07:02:34.673890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.674019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b5) qid:0 cid:6 nsid:b5b5b5b5 cdw10:b5b5b5b5 cdw11:b5b5b5b5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.201 [2024-11-20 07:02:34.674047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.201 NEW_FUNC[1/1]: 0x1966e38 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:30.201 #60 NEW cov: 12484 ft: 15027 corp: 22/3082b lim: 320 exec/s: 60 rss: 74Mb L: 193/250 MS: 1 InsertRepeatedBytes- 00:06:30.201 [2024-11-20 07:02:34.744048] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:30.201 [2024-11-20 07:02:34.744077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.201 [2024-11-20 07:02:34.744216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffff0000 cdw10:ffffffff cdw11:b5ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.201 [2024-11-20 07:02:34.744234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.202 [2024-11-20 07:02:34.744372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b5) qid:0 cid:6 nsid:b5b5b5b5 cdw10:b5b5b5b5 cdw11:b5b5b5b5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.202 [2024-11-20 07:02:34.744389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.460 #61 NEW cov: 12484 ft: 15045 corp: 23/3275b lim: 320 exec/s: 61 rss: 74Mb L: 193/250 MS: 1 ChangeBit- 00:06:30.460 [2024-11-20 07:02:34.813711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.460 [2024-11-20 07:02:34.813740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.460 #64 NEW cov: 12484 ft: 15072 corp: 24/3353b lim: 320 exec/s: 64 rss: 74Mb L: 78/250 MS: 3 InsertRepeatedBytes-EraseBytes-InsertRepeatedBytes- 00:06:30.460 [2024-11-20 07:02:34.864255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.460 [2024-11-20 07:02:34.864284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:30.460 [2024-11-20 07:02:34.864427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:8affffff cdw11:8a8a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.460 [2024-11-20 07:02:34.864445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.460 [2024-11-20 07:02:34.864580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:6 nsid:8a8a8a8a cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.460 [2024-11-20 07:02:34.864603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.460 #65 NEW cov: 12484 ft: 15084 corp: 25/3583b lim: 320 exec/s: 65 rss: 74Mb L: 230/250 MS: 1 ChangeBinInt- 00:06:30.460 [2024-11-20 07:02:34.934426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.460 [2024-11-20 07:02:34.934455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.460 [2024-11-20 07:02:34.934603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff9494949494 00:06:30.460 [2024-11-20 07:02:34.934619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.460 [2024-11-20 07:02:34.934769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.460 [2024-11-20 07:02:34.934786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.460 #66 NEW cov: 12484 ft: 15127 corp: 26/3825b lim: 320 exec/s: 66 rss: 74Mb L: 242/250 MS: 1 CrossOver- 00:06:30.460 [2024-11-20 07:02:35.004703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x94949494949494ff 00:06:30.460 [2024-11-20 07:02:35.004731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.460 [2024-11-20 07:02:35.004868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.460 [2024-11-20 07:02:35.004886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.460 [2024-11-20 07:02:35.005003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.460 [2024-11-20 07:02:35.005023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.719 #67 NEW cov: 12484 ft: 15132 corp: 27/4075b lim: 320 exec/s: 67 rss: 75Mb L: 250/250 MS: 1 ChangeBinInt- 00:06:30.719 [2024-11-20 07:02:35.074818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 
0xffffffffffffffff 00:06:30.719 [2024-11-20 07:02:35.074848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.719 [2024-11-20 07:02:35.074995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffff0000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff0004ffffffffff 00:06:30.719 [2024-11-20 07:02:35.075014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.719 [2024-11-20 07:02:35.075148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.719 [2024-11-20 07:02:35.075164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.719 [2024-11-20 07:02:35.145229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffff0aff 00:06:30.720 [2024-11-20 07:02:35.145256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.720 [2024-11-20 07:02:35.145399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff00000117ffffff 00:06:30.720 [2024-11-20 07:02:35.145416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.720 [2024-11-20 07:02:35.145547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.720 [2024-11-20 07:02:35.145563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.720 [2024-11-20 07:02:35.145685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:ffffff00 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.720 [2024-11-20 07:02:35.145702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.720 #69 NEW cov: 12484 ft: 15355 corp: 28/4368b lim: 320 exec/s: 69 rss: 75Mb L: 293/293 MS: 2 PersAutoDict-CrossOver- DE: "\027\001\000\000"- 00:06:30.720 [2024-11-20 07:02:35.194791] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffb6ffff 00:06:30.720 [2024-11-20 07:02:35.194820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.720 #70 NEW cov: 12484 ft: 15377 corp: 29/4481b lim: 320 exec/s: 35 rss: 75Mb L: 113/293 MS: 1 InsertByte- 00:06:30.720 #70 DONE cov: 12484 ft: 15377 corp: 29/4481b lim: 320 exec/s: 35 rss: 75Mb 00:06:30.720 ###### Recommended dictionary. ###### 00:06:30.720 "\027\001\000\000" # Uses: 1 00:06:30.720 "\000\000\000\000\000\000\000?" # Uses: 0 00:06:30.720 ###### End of recommended dictionary. 
###### 00:06:30.720 Done 70 runs in 2 second(s) 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:30.979 07:02:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:30.979 [2024-11-20 07:02:35.369795] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:30.979 [2024-11-20 07:02:35.369878] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774728 ] 00:06:31.238 [2024-11-20 07:02:35.630312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.238 [2024-11-20 07:02:35.690995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.238 [2024-11-20 07:02:35.750620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.238 [2024-11-20 07:02:35.766963] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:31.238 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:31.238 INFO: Seed: 1396741739 00:06:31.497 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:31.497 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:31.497 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:31.497 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.497 #2 INITED exec/s: 0 rss: 66Mb 00:06:31.497 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:31.497 This may also happen if the target rejected all inputs we tried so far 00:06:31.497 [2024-11-20 07:02:35.812187] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.497 [2024-11-20 07:02:35.812306] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.497 [2024-11-20 07:02:35.812411] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.497 [2024-11-20 07:02:35.812615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.497 [2024-11-20 07:02:35.812645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.497 [2024-11-20 07:02:35.812700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.497 [2024-11-20 07:02:35.812714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.497 [2024-11-20 07:02:35.812767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.497 [2024-11-20 07:02:35.812781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.756 NEW_FUNC[1/716]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:31.756 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:31.756 #6 NEW cov: 12272 ft: 12270 corp: 2/24b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 4 ChangeBit-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:31.756 [2024-11-20 07:02:36.132989] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10340) > buf size (4096) 00:06:31.756 [2024-11-20 07:02:36.133232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a180018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.133264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.756 #7 NEW cov: 12408 ft: 13314 corp: 3/33b lim: 30 exec/s: 0 rss: 73Mb L: 9/23 MS: 1 InsertRepeatedBytes- 00:06:31.756 [2024-11-20 07:02:36.173075] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.173206] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.173326] ctrlr.c:2655:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.173551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffef83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.173579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.756 [2024-11-20 07:02:36.173641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.173656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.756 [2024-11-20 07:02:36.173712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.173726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.756 #13 NEW cov: 12414 ft: 13549 corp: 4/56b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeBit- 00:06:31.756 [2024-11-20 07:02:36.233256] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.233381] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.233499] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.233727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.233754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.756 [2024-11-20 07:02:36.233815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.233829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.756 [2024-11-20 07:02:36.233885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.233899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.756 #14 NEW cov: 12499 ft: 13882 corp: 5/79b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ShuffleBytes- 00:06:31.756 [2024-11-20 07:02:36.273371] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.273496] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.273626] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:31.756 [2024-11-20 07:02:36.273858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.273885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.756 [2024-11-20 07:02:36.273944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.273959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.756 [2024-11-20 07:02:36.274016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:faff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.756 [2024-11-20 07:02:36.274034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.015 #15 NEW cov: 12499 ft: 14010 corp: 6/102b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeBinInt- 00:06:32.015 [2024-11-20 07:02:36.333404] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10340) > buf size (4096) 00:06:32.015 [2024-11-20 07:02:36.333614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a180018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.015 [2024-11-20 07:02:36.333640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.015 #16 NEW cov: 12499 ft: 14150 corp: 7/112b lim: 30 exec/s: 0 rss: 74Mb L: 10/23 MS: 1 InsertByte- 00:06:32.015 [2024-11-20 07:02:36.393695] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007a7a 00:06:32.015 [2024-11-20 07:02:36.393818] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.015 [2024-11-20 07:02:36.393946] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.015 [2024-11-20 07:02:36.394058] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.394283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.394309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.394369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7a7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.394383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.394442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.394456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.394514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.394528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.016 #17 NEW cov: 12499 ft: 14680 
corp: 8/139b lim: 30 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:06:32.016 [2024-11-20 07:02:36.433741] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.433865] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.433981] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.434222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.434249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.434308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.434322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.434382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.434396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.016 #18 NEW cov: 12499 ft: 14803 corp: 9/162b lim: 30 exec/s: 0 rss: 74Mb L: 23/27 MS: 1 ShuffleBytes- 00:06:32.016 [2024-11-20 07:02:36.473921] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007a7a 00:06:32.016 [2024-11-20 07:02:36.474047] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.474165] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.474275] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009fff 00:06:32.016 [2024-11-20 07:02:36.474505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.474533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.474594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7a7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.474613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.474676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.474690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.474747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.474761] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.016 #19 NEW cov: 12499 ft: 14831 corp: 10/190b lim: 30 exec/s: 0 rss: 74Mb L: 28/28 MS: 1 InsertByte- 00:06:32.016 [2024-11-20 07:02:36.534072] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.534194] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.534307] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.016 [2024-11-20 07:02:36.534421] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff42 00:06:32.016 [2024-11-20 07:02:36.534645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.534672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.534732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.534747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.534807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.534821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.016 [2024-11-20 07:02:36.534879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff8332 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.016 [2024-11-20 07:02:36.534893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.016 #20 NEW cov: 12499 ft: 14873 corp: 11/214b lim: 30 exec/s: 0 rss: 74Mb L: 24/28 MS: 1 InsertByte- 00:06:32.276 [2024-11-20 07:02:36.574205] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.574332] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.276 [2024-11-20 07:02:36.574448] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.574558] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff42 00:06:32.276 [2024-11-20 07:02:36.574791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.574818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.574878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.574893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:32.276 [2024-11-20 07:02:36.574955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.574969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.575028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff8332 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.575042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.276 #21 NEW cov: 12499 ft: 14901 corp: 12/238b lim: 30 exec/s: 0 rss: 74Mb L: 24/28 MS: 1 ChangeBinInt- 00:06:32.276 [2024-11-20 07:02:36.634364] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.634486] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:32.276 [2024-11-20 07:02:36.634611] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.634727] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff42 00:06:32.276 [2024-11-20 07:02:36.634960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.634987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.635047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.635078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.635137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.635151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.635210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff8331 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.635223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.276 #22 NEW cov: 12499 ft: 14925 corp: 13/262b lim: 30 exec/s: 0 rss: 74Mb L: 24/28 MS: 1 ChangeASCIIInt- 00:06:32.276 [2024-11-20 07:02:36.694412] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:32.276 [2024-11-20 07:02:36.694637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a180018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.694663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.276 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:32.276 #23 NEW cov: 12522 ft: 15001 corp: 14/271b lim: 30 exec/s: 0 rss: 74Mb L: 9/28 MS: 1 CrossOver- 00:06:32.276 [2024-11-20 07:02:36.734647] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.734773] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.734887] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.735110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.735135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.735212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.735227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.735286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.735301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.276 #24 NEW cov: 12522 ft: 15052 corp: 15/294b lim: 30 exec/s: 0 rss: 74Mb L: 23/28 MS: 1 ShuffleBytes- 00:06:32.276 [2024-11-20 07:02:36.794824] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.794945] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.795059] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.276 [2024-11-20 07:02:36.795175] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff42 00:06:32.276 [2024-11-20 07:02:36.795400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.795426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.795486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.795501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.795559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.795573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.276 [2024-11-20 07:02:36.795631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:bfff8332 
cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.276 [2024-11-20 07:02:36.795644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.276 #25 NEW cov: 12522 ft: 15075 corp: 16/318b lim: 30 exec/s: 25 rss: 74Mb L: 24/28 MS: 1 ChangeBit- 00:06:32.535 [2024-11-20 07:02:36.834868] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.535 [2024-11-20 07:02:36.834993] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.535 [2024-11-20 07:02:36.835112] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.535 [2024-11-20 07:02:36.835347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.535 [2024-11-20 07:02:36.835377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.536 [2024-11-20 07:02:36.835438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.835453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.536 [2024-11-20 07:02:36.835510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.835524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.536 #26 NEW cov: 12522 ft: 15154 corp: 17/341b lim: 30 exec/s: 26 rss: 74Mb L: 23/28 MS: 1 ChangeByte- 00:06:32.536 [2024-11-20 07:02:36.875031] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007a7a 00:06:32.536 [2024-11-20 07:02:36.875152] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:36.875271] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:36.875384] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:36.875641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fff283ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.875669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.536 [2024-11-20 07:02:36.875729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7a7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.875744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.536 [2024-11-20 07:02:36.875800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.875814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.536 
[2024-11-20 07:02:36.875872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.875886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.536 #27 NEW cov: 12522 ft: 15193 corp: 18/368b lim: 30 exec/s: 27 rss: 74Mb L: 27/28 MS: 1 ChangeByte- 00:06:32.536 [2024-11-20 07:02:36.915085] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:36.915207] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:36.915435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffef83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.915461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.536 [2024-11-20 07:02:36.915521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.915535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.536 #28 NEW cov: 12522 ft: 15488 corp: 19/382b lim: 30 exec/s: 28 rss: 74Mb L: 14/28 MS: 1 EraseBytes- 00:06:32.536 [2024-11-20 07:02:36.975275] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:36.975422] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:36.975539] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:36.975777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.975804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.536 [2024-11-20 07:02:36.975866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.975880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.536 [2024-11-20 07:02:36.975939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:36.975953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.536 #29 NEW cov: 12522 ft: 15508 corp: 20/402b lim: 30 exec/s: 29 rss: 74Mb L: 20/28 MS: 1 EraseBytes- 00:06:32.536 [2024-11-20 07:02:37.015336] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.536 [2024-11-20 07:02:37.015461] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000032ff 00:06:32.536 [2024-11-20 07:02:37.015686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:37.015713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.536 [2024-11-20 07:02:37.015775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:37.015789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.536 #30 NEW cov: 12522 ft: 15582 corp: 21/416b lim: 30 exec/s: 30 rss: 74Mb L: 14/28 MS: 1 EraseBytes- 00:06:32.536 [2024-11-20 07:02:37.055412] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x18ff 00:06:32.536 [2024-11-20 07:02:37.055635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a180018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.536 [2024-11-20 07:02:37.055660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.536 #31 NEW cov: 12522 ft: 15612 corp: 22/425b lim: 30 exec/s: 31 rss: 74Mb L: 9/28 MS: 1 CrossOver- 00:06:32.795 [2024-11-20 07:02:37.095643] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.095771] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.095888] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.096133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.096159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.795 [2024-11-20 07:02:37.096218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.096232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.795 [2024-11-20 07:02:37.096291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.096308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.795 #32 NEW cov: 12522 ft: 15687 corp: 23/447b lim: 30 exec/s: 32 rss: 74Mb L: 22/28 MS: 1 EraseBytes- 00:06:32.795 [2024-11-20 07:02:37.155766] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.155890] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.156110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffef83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.156136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:32.795 [2024-11-20 07:02:37.156197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ef cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.156211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.795 #33 NEW cov: 12522 ft: 15703 corp: 24/461b lim: 30 exec/s: 33 rss: 74Mb L: 14/28 MS: 1 ChangeBit- 00:06:32.795 [2024-11-20 07:02:37.215960] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.216089] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (18080) > len (564) 00:06:32.795 [2024-11-20 07:02:37.216203] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.216426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.216452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.795 [2024-11-20 07:02:37.216511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:008c00d7 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.216524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.795 [2024-11-20 07:02:37.216584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:c08a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.216603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.795 #34 NEW cov: 12535 ft: 15747 corp: 25/483b lim: 30 exec/s: 34 rss: 75Mb L: 22/28 MS: 1 CMP- DE: "\000\214\327\360F\240\300\212"- 00:06:32.795 [2024-11-20 07:02:37.276128] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.276256] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.276387] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.276626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffef83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.276653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.795 [2024-11-20 07:02:37.276716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.276730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.795 [2024-11-20 07:02:37.276791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.795 [2024-11-20 07:02:37.276808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.795 #35 NEW cov: 12535 ft: 15757 corp: 26/506b lim: 30 exec/s: 35 rss: 75Mb L: 23/28 MS: 1 ChangeBinInt- 00:06:32.795 [2024-11-20 07:02:37.316183] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.795 [2024-11-20 07:02:37.316311] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:32.796 [2024-11-20 07:02:37.316541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffef83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-20 07:02:37.316567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.796 [2024-11-20 07:02:37.316632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.796 [2024-11-20 07:02:37.316647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.796 #36 NEW cov: 12535 ft: 15778 corp: 27/520b lim: 30 exec/s: 36 rss: 75Mb L: 14/28 MS: 1 ChangeBit- 00:06:33.055 [2024-11-20 07:02:37.356382] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007a7a 00:06:33.055 [2024-11-20 07:02:37.356509] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.055 [2024-11-20 07:02:37.356636] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.055 [2024-11-20 07:02:37.356756] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009fff 00:06:33.055 [2024-11-20 07:02:37.356991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.357017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.055 [2024-11-20 07:02:37.357080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7a7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.357095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.055 [2024-11-20 07:02:37.357155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.357168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.055 [2024-11-20 07:02:37.357228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.357242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.055 #37 NEW cov: 12535 ft: 15805 corp: 28/548b lim: 30 exec/s: 37 rss: 75Mb L: 28/28 MS: 1 ShuffleBytes- 00:06:33.055 [2024-11-20 07:02:37.416581] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007a7a 00:06:33.055 [2024-11-20 07:02:37.416713] 
ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.055 [2024-11-20 07:02:37.416830] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.055 [2024-11-20 07:02:37.416945] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009fff 00:06:33.055 [2024-11-20 07:02:37.417170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83fe cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.417194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.055 [2024-11-20 07:02:37.417256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7a7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.417273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.055 [2024-11-20 07:02:37.417333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.417347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.055 [2024-11-20 07:02:37.417405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.417419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.055 #38 NEW cov: 12535 ft: 15806 corp: 29/576b lim: 30 exec/s: 38 rss: 75Mb L: 28/28 MS: 1 ChangeBinInt- 00:06:33.055 [2024-11-20 07:02:37.456689] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.055 [2024-11-20 07:02:37.456819] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.055 [2024-11-20 07:02:37.456942] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff02 00:06:33.055 [2024-11-20 07:02:37.457060] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff42 00:06:33.055 [2024-11-20 07:02:37.457284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.457310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.055 [2024-11-20 07:02:37.457370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.457385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.055 [2024-11-20 07:02:37.457442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.055 [2024-11-20 07:02:37.457455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:33.055 [2024-11-20 07:02:37.457511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff8331 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.457525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.056 #39 NEW cov: 12535 ft: 15829 corp: 30/600b lim: 30 exec/s: 39 rss: 75Mb L: 24/28 MS: 1 CopyPart- 00:06:33.056 [2024-11-20 07:02:37.516877] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007a7a 00:06:33.056 [2024-11-20 07:02:37.517002] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.056 [2024-11-20 07:02:37.517116] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.056 [2024-11-20 07:02:37.517226] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.056 [2024-11-20 07:02:37.517452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fff283ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.517479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.056 [2024-11-20 07:02:37.517537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7a7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.517551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.056 [2024-11-20 07:02:37.517612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff8312 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.517626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.056 [2024-11-20 07:02:37.517679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.517693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.056 #40 NEW cov: 12535 ft: 15838 corp: 31/627b lim: 30 exec/s: 40 rss: 75Mb L: 27/28 MS: 1 ChangeByte- 00:06:33.056 [2024-11-20 07:02:37.577019] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.056 [2024-11-20 07:02:37.577146] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:33.056 [2024-11-20 07:02:37.577284] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff02 00:06:33.056 [2024-11-20 07:02:37.577403] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff42 00:06:33.056 [2024-11-20 07:02:37.577632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.577659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.056 [2024-11-20 07:02:37.577718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02df cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.577732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.056 [2024-11-20 07:02:37.577788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.577802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.056 [2024-11-20 07:02:37.577860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff8331 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.056 [2024-11-20 07:02:37.577873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.315 #41 NEW cov: 12535 ft: 15846 corp: 32/651b lim: 30 exec/s: 41 rss: 75Mb L: 24/28 MS: 1 ChangeBit- 00:06:33.315 [2024-11-20 07:02:37.637209] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.315 [2024-11-20 07:02:37.637357] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.315 [2024-11-20 07:02:37.637482] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.315 [2024-11-20 07:02:37.637730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.637756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.315 [2024-11-20 07:02:37.637814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.637828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.315 [2024-11-20 07:02:37.637885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.637898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.315 #42 NEW cov: 12535 ft: 15855 corp: 33/674b lim: 30 exec/s: 42 rss: 75Mb L: 23/28 MS: 1 ShuffleBytes- 00:06:33.315 [2024-11-20 07:02:37.697339] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.315 [2024-11-20 07:02:37.697467] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.315 [2024-11-20 07:02:37.697587] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fff2 00:06:33.315 [2024-11-20 07:02:37.697826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.697852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.315 [2024-11-20 07:02:37.697914] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.697928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.315 [2024-11-20 07:02:37.697988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.698001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.315 #43 NEW cov: 12535 ft: 15865 corp: 34/697b lim: 30 exec/s: 43 rss: 75Mb L: 23/28 MS: 1 ChangeByte- 00:06:33.315 [2024-11-20 07:02:37.737438] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.315 [2024-11-20 07:02:37.737583] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.315 [2024-11-20 07:02:37.737724] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff42 00:06:33.315 [2024-11-20 07:02:37.737946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.737972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.315 [2024-11-20 07:02:37.738032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.738047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.315 [2024-11-20 07:02:37.738101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f2ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.738114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.315 #44 NEW cov: 12535 ft: 15882 corp: 35/715b lim: 30 exec/s: 44 rss: 75Mb L: 18/28 MS: 1 EraseBytes- 00:06:33.315 [2024-11-20 07:02:37.797645] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:33.315 [2024-11-20 07:02:37.797772] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:33.315 [2024-11-20 07:02:37.797893] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (668512) > buf size (4096) 00:06:33.315 [2024-11-20 07:02:37.798121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.315 [2024-11-20 07:02:37.798147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.315 [2024-11-20 07:02:37.798206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.316 [2024-11-20 07:02:37.798221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.316 
[2024-11-20 07:02:37.798283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8cd702f0 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.316 [2024-11-20 07:02:37.798296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.316 #45 NEW cov: 12535 ft: 15941 corp: 36/738b lim: 30 exec/s: 22 rss: 75Mb L: 23/28 MS: 1 PersAutoDict- DE: "\000\214\327\360F\240\300\212"- 00:06:33.316 #45 DONE cov: 12535 ft: 15941 corp: 36/738b lim: 30 exec/s: 22 rss: 75Mb 00:06:33.316 ###### Recommended dictionary. ###### 00:06:33.316 "\000\214\327\360F\240\300\212" # Uses: 1 00:06:33.316 ###### End of recommended dictionary. ###### 00:06:33.316 Done 45 runs in 2 second(s) 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:33.575 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:33.576 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:33.576 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:33.576 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:33.576 07:02:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:33.576 [2024-11-20 07:02:37.967733] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:33.576 [2024-11-20 07:02:37.967804] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775259 ] 00:06:33.835 [2024-11-20 07:02:38.224064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.835 [2024-11-20 07:02:38.280812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.835 [2024-11-20 07:02:38.339891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.835 [2024-11-20 07:02:38.356204] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:33.835 INFO: Running with entropic power schedule (0xFF, 100). 00:06:33.835 INFO: Seed: 3986741850 00:06:33.835 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:33.835 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:33.835 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:33.835 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.835 #2 INITED exec/s: 0 rss: 65Mb 00:06:33.835 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:33.835 This may also happen if the target rejected all inputs we tried so far 00:06:34.094 [2024-11-20 07:02:38.411550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:73007300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.094 [2024-11-20 07:02:38.411578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.353 NEW_FUNC[1/715]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:34.353 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:34.353 #15 NEW cov: 12228 ft: 12224 corp: 2/8b lim: 35 exec/s: 0 rss: 72Mb L: 7/7 MS: 3 CrossOver-CMP-CopyPart- DE: "\000s"- 00:06:34.353 [2024-11-20 07:02:38.732937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.353 [2024-11-20 07:02:38.732969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.353 [2024-11-20 07:02:38.733030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.353 [2024-11-20 07:02:38.733045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.353 [2024-11-20 07:02:38.733106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.353 [2024-11-20 07:02:38.733120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.353 [2024-11-20 07:02:38.733181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.353 [2024-11-20 07:02:38.733195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.353 #22 NEW cov: 12341 ft: 13540 corp: 3/38b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:34.353 [2024-11-20 07:02:38.772544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:73007300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.353 [2024-11-20 07:02:38.772571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.353 #28 NEW cov: 12347 ft: 13770 corp: 4/45b lim: 35 exec/s: 0 rss: 73Mb L: 7/30 MS: 1 ShuffleBytes- 00:06:34.353 [2024-11-20 07:02:38.832698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73000a cdw11:73000073 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.353 [2024-11-20 07:02:38.832726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.353 #29 NEW cov: 12432 ft: 13930 corp: 5/52b lim: 35 exec/s: 0 rss: 73Mb L: 7/30 MS: 1 CrossOver- 00:06:34.353 [2024-11-20 07:02:38.892905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:73007300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.353 [2024-11-20 07:02:38.892932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.612 #30 NEW cov: 12432 ft: 14050 corp: 6/63b lim: 35 exec/s: 0 rss: 73Mb L: 11/30 MS: 1 CopyPart- 00:06:34.612 [2024-11-20 07:02:38.932981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a0a005a cdw11:73007300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.612 [2024-11-20 07:02:38.933009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.612 #31 NEW cov: 12432 ft: 14090 corp: 7/71b lim: 35 exec/s: 0 rss: 73Mb L: 8/30 MS: 1 InsertByte- 00:06:34.612 [2024-11-20 07:02:38.993582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.612 [2024-11-20 07:02:38.993613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.612 [2024-11-20 07:02:38.993673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.612 [2024-11-20 07:02:38.993687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.612 [2024-11-20 07:02:38.993762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.612 [2024-11-20 07:02:38.993777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.612 [2024-11-20 07:02:38.993836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.612 
[2024-11-20 07:02:38.993850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.612 #32 NEW cov: 12432 ft: 14225 corp: 8/101b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 CopyPart- 00:06:34.612 [2024-11-20 07:02:39.053294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000073 cdw11:0a000073 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.612 [2024-11-20 07:02:39.053320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.612 #33 NEW cov: 12432 ft: 14256 corp: 9/108b lim: 35 exec/s: 0 rss: 73Mb L: 7/30 MS: 1 ShuffleBytes- 00:06:34.612 [2024-11-20 07:02:39.093398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73000a cdw11:73000073 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.612 [2024-11-20 07:02:39.093425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.612 #34 NEW cov: 12432 ft: 14279 corp: 10/115b lim: 35 exec/s: 0 rss: 73Mb L: 7/30 MS: 1 CrossOver- 00:06:34.612 [2024-11-20 07:02:39.133538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73005a cdw11:73000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.612 [2024-11-20 07:02:39.133565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.872 #35 NEW cov: 12432 ft: 14349 corp: 11/123b lim: 35 exec/s: 0 rss: 73Mb L: 8/30 MS: 1 CopyPart- 00:06:34.872 [2024-11-20 07:02:39.193758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a000a cdw11:73007300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.872 [2024-11-20 07:02:39.193785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.872 #36 NEW cov: 12432 ft: 14353 corp: 12/130b lim: 35 exec/s: 0 rss: 73Mb L: 7/30 MS: 1 ShuffleBytes- 00:06:34.872 [2024-11-20 07:02:39.234251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d4000073 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.872 [2024-11-20 07:02:39.234277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.872 [2024-11-20 07:02:39.234339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.872 [2024-11-20 07:02:39.234353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.872 [2024-11-20 07:02:39.234419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.872 [2024-11-20 07:02:39.234432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.872 [2024-11-20 07:02:39.234488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.872 [2024-11-20 07:02:39.234502] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.872 #37 NEW cov: 12432 ft: 14418 corp: 13/162b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000s"- 00:06:34.872 [2024-11-20 07:02:39.294002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a0a005a cdw11:00007300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.872 [2024-11-20 07:02:39.294028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.872 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:34.873 #38 NEW cov: 12455 ft: 14485 corp: 14/172b lim: 35 exec/s: 0 rss: 73Mb L: 10/32 MS: 1 PersAutoDict- DE: "\000s"- 00:06:34.873 [2024-11-20 07:02:39.334271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a000a cdw11:ff007300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.873 [2024-11-20 07:02:39.334297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.873 [2024-11-20 07:02:39.334358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.873 [2024-11-20 07:02:39.334371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.873 #39 NEW cov: 12455 ft: 14760 corp: 15/192b lim: 35 exec/s: 0 rss: 74Mb L: 20/32 MS: 1 InsertRepeatedBytes- 00:06:34.873 [2024-11-20 07:02:39.394310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73005a cdw11:73000a31 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.873 [2024-11-20 07:02:39.394337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.131 #40 NEW cov: 12455 ft: 14789 corp: 16/200b lim: 35 exec/s: 40 rss: 74Mb L: 8/32 MS: 1 ChangeByte- 00:06:35.131 [2024-11-20 07:02:39.454872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.454898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.131 [2024-11-20 07:02:39.454959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.454973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.131 [2024-11-20 07:02:39.455033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.455047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.131 [2024-11-20 07:02:39.455105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:ff00feff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.455119] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.131 #41 NEW cov: 12455 ft: 14809 corp: 17/230b lim: 35 exec/s: 41 rss: 74Mb L: 30/32 MS: 1 CMP- DE: "\376\377\377\377"- 00:06:35.131 [2024-11-20 07:02:39.495005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.495032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.131 [2024-11-20 07:02:39.495108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.495122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.131 [2024-11-20 07:02:39.495180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.495194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.131 [2024-11-20 07:02:39.495249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:ff00feff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.495263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.131 #42 NEW cov: 12455 ft: 14862 corp: 18/260b lim: 35 exec/s: 42 rss: 74Mb L: 30/32 MS: 1 CopyPart- 00:06:35.131 [2024-11-20 07:02:39.554724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73000a cdw11:73000073 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.554758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.131 #48 NEW cov: 12455 ft: 14879 corp: 19/271b lim: 35 exec/s: 48 rss: 74Mb L: 11/32 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:06:35.131 [2024-11-20 07:02:39.595255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.131 [2024-11-20 07:02:39.595282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.131 [2024-11-20 07:02:39.595343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.132 [2024-11-20 07:02:39.595357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.132 [2024-11-20 07:02:39.595418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.132 [2024-11-20 07:02:39.595432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.132 [2024-11-20 07:02:39.595492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 
cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.132 [2024-11-20 07:02:39.595506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.132 #54 NEW cov: 12455 ft: 14906 corp: 20/301b lim: 35 exec/s: 54 rss: 74Mb L: 30/32 MS: 1 ChangeBinInt- 00:06:35.132 [2024-11-20 07:02:39.635035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:00002d73 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.132 [2024-11-20 07:02:39.635061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.132 #55 NEW cov: 12455 ft: 14918 corp: 21/309b lim: 35 exec/s: 55 rss: 74Mb L: 8/32 MS: 1 InsertByte- 00:06:35.132 [2024-11-20 07:02:39.675065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:73002d00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.132 [2024-11-20 07:02:39.675094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.390 #56 NEW cov: 12455 ft: 14941 corp: 22/317b lim: 35 exec/s: 56 rss: 74Mb L: 8/32 MS: 1 PersAutoDict- DE: "\000s"- 00:06:35.390 [2024-11-20 07:02:39.735224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73005a cdw11:73000a31 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.390 [2024-11-20 07:02:39.735249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.390 #57 NEW cov: 12455 ft: 14949 corp: 23/325b lim: 35 exec/s: 57 rss: 74Mb L: 8/32 MS: 1 CopyPart- 00:06:35.390 [2024-11-20 07:02:39.795409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000073 cdw11:0a000087 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.390 [2024-11-20 07:02:39.795435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.390 #58 NEW cov: 12455 ft: 14981 corp: 24/332b lim: 35 exec/s: 58 rss: 74Mb L: 7/32 MS: 1 ChangeByte- 00:06:35.390 [2024-11-20 07:02:39.855720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a000a cdw11:ff007300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.390 [2024-11-20 07:02:39.855746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.390 [2024-11-20 07:02:39.855822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00dfff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.390 [2024-11-20 07:02:39.855836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.390 #59 NEW cov: 12455 ft: 14999 corp: 25/352b lim: 35 exec/s: 59 rss: 74Mb L: 20/32 MS: 1 ChangeBit- 00:06:35.390 [2024-11-20 07:02:39.915731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73005a cdw11:73000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.390 [2024-11-20 07:02:39.915757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.390 #60 NEW cov: 12455 ft: 15059 corp: 26/364b lim: 35 exec/s: 
60 rss: 74Mb L: 12/32 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:06:35.649 [2024-11-20 07:02:39.956238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:39.956264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.649 [2024-11-20 07:02:39.956324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:39.956338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.649 [2024-11-20 07:02:39.956396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:39.956410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.649 [2024-11-20 07:02:39.956468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:dc00fedc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:39.956482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.649 #61 NEW cov: 12455 ft: 15080 corp: 27/398b lim: 35 exec/s: 61 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:35.649 [2024-11-20 07:02:40.016462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff008c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:40.016488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.649 [2024-11-20 07:02:40.016570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:40.016585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.649 [2024-11-20 07:02:40.016646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:40.016661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.649 [2024-11-20 07:02:40.016721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:40.016735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.649 #65 NEW cov: 12455 ft: 15092 corp: 28/428b lim: 35 exec/s: 65 rss: 74Mb L: 30/34 MS: 4 InsertByte-CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:35.649 [2024-11-20 07:02:40.056625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d4000073 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:40.056651] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.649 [2024-11-20 07:02:40.056729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d600d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.649 [2024-11-20 07:02:40.056744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.649 [2024-11-20 07:02:40.056806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.650 [2024-11-20 07:02:40.056820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.650 [2024-11-20 07:02:40.056879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.650 [2024-11-20 07:02:40.056894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.650 #66 NEW cov: 12455 ft: 15111 corp: 29/460b lim: 35 exec/s: 66 rss: 75Mb L: 32/34 MS: 1 ChangeBit- 00:06:35.650 [2024-11-20 07:02:40.116798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d4000073 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.650 [2024-11-20 07:02:40.116824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.650 [2024-11-20 07:02:40.116886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d45d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.650 [2024-11-20 07:02:40.116900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.650 [2024-11-20 07:02:40.116959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.650 [2024-11-20 07:02:40.116974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.650 [2024-11-20 07:02:40.117033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.650 [2024-11-20 07:02:40.117047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.650 #67 NEW cov: 12455 ft: 15122 corp: 30/492b lim: 35 exec/s: 67 rss: 75Mb L: 32/34 MS: 1 ChangeByte- 00:06:35.650 [2024-11-20 07:02:40.156431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73000a cdw11:73000073 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.650 [2024-11-20 07:02:40.156456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.650 #68 NEW cov: 12455 ft: 15130 corp: 31/503b lim: 35 exec/s: 68 rss: 75Mb L: 11/34 MS: 1 CMP- DE: "\361\000"- 00:06:35.910 [2024-11-20 07:02:40.217028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d4000073 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.217054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.910 [2024-11-20 07:02:40.217130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d45d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.217144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.910 [2024-11-20 07:02:40.217201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.217215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.910 [2024-11-20 07:02:40.217272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.217285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.910 #69 NEW cov: 12455 ft: 15159 corp: 32/537b lim: 35 exec/s: 69 rss: 75Mb L: 34/34 MS: 1 PersAutoDict- DE: "\000s"- 00:06:35.910 [2024-11-20 07:02:40.276794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a0000f2 cdw11:73002d00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.276821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.910 #70 NEW cov: 12455 ft: 15205 corp: 33/545b lim: 35 exec/s: 70 rss: 75Mb L: 8/34 MS: 1 ChangeBinInt- 00:06:35.910 [2024-11-20 07:02:40.337398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d4d40008 cdw11:d400d428 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.337425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.910 [2024-11-20 07:02:40.337503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.337518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.910 [2024-11-20 07:02:40.337576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d4d400d4 cdw11:d400d4d4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.337589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.910 [2024-11-20 07:02:40.337653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d4d400d4 cdw11:dc00fedc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.337666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.910 #71 NEW cov: 12455 ft: 15210 corp: 34/579b lim: 35 exec/s: 71 rss: 75Mb L: 34/34 MS: 1 ChangeByte- 00:06:35.910 [2024-11-20 07:02:40.397104] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a73005a cdw11:73000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.910 [2024-11-20 07:02:40.397134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.910 #72 NEW cov: 12455 ft: 15241 corp: 35/587b lim: 35 exec/s: 36 rss: 75Mb L: 8/34 MS: 1 ChangeBinInt- 00:06:35.910 #72 DONE cov: 12455 ft: 15241 corp: 35/587b lim: 35 exec/s: 36 rss: 75Mb 00:06:35.910 ###### Recommended dictionary. ###### 00:06:35.910 "\000s" # Uses: 5 00:06:35.910 "\376\377\377\377" # Uses: 2 00:06:35.910 "\361\000" # Uses: 0 00:06:35.910 ###### End of recommended dictionary. ###### 00:06:35.910 Done 72 runs in 2 second(s) 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:36.169 07:02:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:36.169 [2024-11-20 07:02:40.570901] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:36.169 [2024-11-20 07:02:40.570971] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775670 ] 00:06:36.428 [2024-11-20 07:02:40.831089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.428 [2024-11-20 07:02:40.885067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.428 [2024-11-20 07:02:40.944430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.428 [2024-11-20 07:02:40.960786] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:36.428 INFO: Running with entropic power schedule (0xFF, 100). 00:06:36.428 INFO: Seed: 2295762984 00:06:36.687 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:36.687 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:36.687 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:36.687 INFO: A corpus is not provided, starting from an empty corpus 00:06:36.687 #2 INITED exec/s: 0 rss: 65Mb 00:06:36.687 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:36.687 This may also happen if the target rejected all inputs we tried so far 00:06:36.946 NEW_FUNC[1/704]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:36.946 NEW_FUNC[2/704]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:36.946 #8 NEW cov: 12139 ft: 12131 corp: 2/13b lim: 20 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 InsertRepeatedBytes- 00:06:36.946 #9 NEW cov: 12253 ft: 13112 corp: 3/22b lim: 20 exec/s: 0 rss: 73Mb L: 9/12 MS: 1 EraseBytes- 00:06:36.946 #10 NEW cov: 12259 ft: 13280 corp: 4/32b lim: 20 exec/s: 0 rss: 73Mb L: 10/12 MS: 1 InsertByte- 00:06:36.946 #21 NEW cov: 12344 ft: 13658 corp: 5/41b lim: 20 exec/s: 0 rss: 73Mb L: 9/12 MS: 1 ShuffleBytes- 00:06:37.204 #22 NEW cov: 12344 ft: 13699 corp: 6/50b lim: 20 exec/s: 0 rss: 73Mb L: 9/12 MS: 1 ChangeByte- 00:06:37.205 #23 NEW cov: 12344 ft: 13757 corp: 7/59b lim: 20 exec/s: 0 rss: 73Mb L: 9/12 MS: 1 ChangeBit- 00:06:37.205 #24 NEW cov: 12344 ft: 13811 corp: 8/70b lim: 20 exec/s: 0 rss: 73Mb L: 11/12 MS: 1 EraseBytes- 00:06:37.205 #25 NEW cov: 12344 ft: 13837 corp: 9/82b lim: 20 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 ChangeBinInt- 00:06:37.205 #26 NEW cov: 12344 ft: 13899 corp: 10/95b lim: 20 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CMP- DE: "\004\000"- 00:06:37.205 #27 NEW cov: 12344 ft: 13935 corp: 11/108b lim: 20 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CMP- DE: "\000\000\001\000"- 00:06:37.463 #28 NEW cov: 12344 ft: 13968 corp: 12/117b lim: 20 exec/s: 0 rss: 73Mb L: 9/13 MS: 1 CrossOver- 00:06:37.463 [2024-11-20 07:02:41.818783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:37.463 [2024-11-20 07:02:41.818821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.463 NEW_FUNC[1/19]: 0x136a898 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3482 00:06:37.464 NEW_FUNC[2/19]: 0x136b418 in 
nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3424 00:06:37.464 #29 NEW cov: 12663 ft: 14584 corp: 13/136b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:37.464 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:37.464 #30 NEW cov: 12686 ft: 14862 corp: 14/141b lim: 20 exec/s: 0 rss: 73Mb L: 5/19 MS: 1 EraseBytes- 00:06:37.464 #31 NEW cov: 12686 ft: 14927 corp: 15/161b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 CopyPart- 00:06:37.464 #32 NEW cov: 12686 ft: 14972 corp: 16/171b lim: 20 exec/s: 0 rss: 73Mb L: 10/20 MS: 1 InsertByte- 00:06:37.722 #33 NEW cov: 12686 ft: 14986 corp: 17/191b lim: 20 exec/s: 33 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:37.722 #34 NEW cov: 12686 ft: 14992 corp: 18/201b lim: 20 exec/s: 34 rss: 74Mb L: 10/20 MS: 1 ShuffleBytes- 00:06:37.722 #35 NEW cov: 12686 ft: 15041 corp: 19/220b lim: 20 exec/s: 35 rss: 74Mb L: 19/20 MS: 1 CopyPart- 00:06:37.722 #36 NEW cov: 12686 ft: 15090 corp: 20/237b lim: 20 exec/s: 36 rss: 74Mb L: 17/20 MS: 1 InsertRepeatedBytes- 00:06:37.722 #37 NEW cov: 12686 ft: 15113 corp: 21/248b lim: 20 exec/s: 37 rss: 74Mb L: 11/20 MS: 1 ChangeByte- 00:06:37.981 #38 NEW cov: 12686 ft: 15122 corp: 22/253b lim: 20 exec/s: 38 rss: 74Mb L: 5/20 MS: 1 CrossOver- 00:06:37.981 [2024-11-20 07:02:42.320287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:37.981 [2024-11-20 07:02:42.320318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.981 #39 NEW cov: 12686 ft: 15161 corp: 23/270b lim: 20 exec/s: 39 rss: 74Mb L: 17/20 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:37.981 #40 NEW cov: 12686 ft: 15208 corp: 24/279b lim: 20 exec/s: 40 rss: 74Mb L: 9/20 MS: 1 CMP- DE: "\377\213\327\363Fw\3620"- 00:06:37.981 #41 NEW cov: 12686 ft: 15212 corp: 25/296b lim: 20 exec/s: 41 rss: 74Mb L: 17/20 MS: 1 EraseBytes- 00:06:37.981 #42 NEW cov: 12686 ft: 15229 corp: 26/314b lim: 20 exec/s: 42 rss: 74Mb L: 18/20 MS: 1 CopyPart- 00:06:38.239 #43 NEW cov: 12686 ft: 15243 corp: 27/325b lim: 20 exec/s: 43 rss: 74Mb L: 11/20 MS: 1 PersAutoDict- DE: "\004\000"- 00:06:38.239 #44 NEW cov: 12686 ft: 15246 corp: 28/330b lim: 20 exec/s: 44 rss: 74Mb L: 5/20 MS: 1 ChangeByte- 00:06:38.239 #45 NEW cov: 12686 ft: 15256 corp: 29/339b lim: 20 exec/s: 45 rss: 74Mb L: 9/20 MS: 1 ShuffleBytes- 00:06:38.239 #46 NEW cov: 12686 ft: 15328 corp: 30/356b lim: 20 exec/s: 46 rss: 74Mb L: 17/20 MS: 1 PersAutoDict- DE: "\377\213\327\363Fw\3620"- 00:06:38.239 #47 NEW cov: 12686 ft: 15348 corp: 31/365b lim: 20 exec/s: 47 rss: 74Mb L: 9/20 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:06:38.239 [2024-11-20 07:02:42.761204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.239 [2024-11-20 07:02:42.761232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.239 #48 NEW cov: 12686 ft: 15455 corp: 32/376b lim: 20 exec/s: 48 rss: 74Mb L: 11/20 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:38.498 #49 NEW cov: 12686 ft: 15469 corp: 33/395b lim: 20 exec/s: 49 rss: 74Mb L: 19/20 MS: 1 ChangeBit- 00:06:38.498 #50 NEW cov: 12686 ft: 15473 corp: 34/405b lim: 20 exec/s: 50 rss: 74Mb L: 10/20 MS: 1 
InsertByte- 00:06:38.498 #51 NEW cov: 12686 ft: 15481 corp: 35/422b lim: 20 exec/s: 51 rss: 75Mb L: 17/20 MS: 1 CopyPart- 00:06:38.498 #52 NEW cov: 12686 ft: 15494 corp: 36/431b lim: 20 exec/s: 52 rss: 75Mb L: 9/20 MS: 1 ChangeBit- 00:06:38.498 #53 NEW cov: 12686 ft: 15547 corp: 37/441b lim: 20 exec/s: 26 rss: 75Mb L: 10/20 MS: 1 ChangeBit- 00:06:38.498 #53 DONE cov: 12686 ft: 15547 corp: 37/441b lim: 20 exec/s: 26 rss: 75Mb 00:06:38.498 ###### Recommended dictionary. ###### 00:06:38.498 "\004\000" # Uses: 1 00:06:38.498 "\000\000\001\000" # Uses: 1 00:06:38.498 "\001\000\000\000\000\000\000\000" # Uses: 1 00:06:38.499 "\377\213\327\363Fw\3620" # Uses: 1 00:06:38.499 ###### End of recommended dictionary. ###### 00:06:38.499 Done 53 runs in 2 second(s) 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:38.758 07:02:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:38.758 [2024-11-20 07:02:43.195051] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:38.758 [2024-11-20 07:02:43.195135] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776083 ] 00:06:39.017 [2024-11-20 07:02:43.461329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.017 [2024-11-20 07:02:43.516475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.275 [2024-11-20 07:02:43.576028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.275 [2024-11-20 07:02:43.592370] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:39.275 INFO: Running with entropic power schedule (0xFF, 100). 00:06:39.275 INFO: Seed: 632809097 00:06:39.275 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:39.275 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:39.275 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:39.275 INFO: A corpus is not provided, starting from an empty corpus 00:06:39.275 #2 INITED exec/s: 0 rss: 65Mb 00:06:39.275 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:39.275 This may also happen if the target rejected all inputs we tried so far 00:06:39.275 [2024-11-20 07:02:43.659852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000600 cdw11:06000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.275 [2024-11-20 07:02:43.659891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.533 NEW_FUNC[1/716]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:39.533 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:39.533 #6 NEW cov: 12249 ft: 12243 corp: 2/11b lim: 35 exec/s: 0 rss: 73Mb L: 10/10 MS: 4 ShuffleBytes-ChangeBinInt-CMP-CopyPart- DE: "\006\000\000\000"- 00:06:39.533 [2024-11-20 07:02:44.000107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:34340a34 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.533 [2024-11-20 07:02:44.000156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.533 #10 NEW cov: 12362 ft: 12864 corp: 3/24b lim: 35 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 CopyPart-CrossOver-EraseBytes-InsertRepeatedBytes- 00:06:39.533 [2024-11-20 07:02:44.050059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.533 [2024-11-20 07:02:44.050086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.533 #13 NEW cov: 12368 ft: 13086 corp: 4/33b lim: 35 exec/s: 0 rss: 73Mb L: 9/13 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:39.792 [2024-11-20 07:02:44.100203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8aff cdw11:ffff0003 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.100233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.792 #19 NEW cov: 12453 ft: 13403 corp: 5/42b lim: 35 exec/s: 0 rss: 73Mb L: 9/13 MS: 1 ChangeBinInt- 00:06:39.792 [2024-11-20 07:02:44.171513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.171541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.792 [2024-11-20 07:02:44.171671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.171690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.792 [2024-11-20 07:02:44.171810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.171827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.792 [2024-11-20 07:02:44.171948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.171966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.792 [2024-11-20 07:02:44.172079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:cccccccc cdw11:cccc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.172094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.792 #21 NEW cov: 12453 ft: 14335 corp: 6/77b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:39.792 [2024-11-20 07:02:44.220561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.220589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.792 #22 NEW cov: 12453 ft: 14434 corp: 7/86b lim: 35 exec/s: 0 rss: 73Mb L: 9/35 MS: 1 ShuffleBytes- 00:06:39.792 [2024-11-20 07:02:44.271773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.271800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.792 [2024-11-20 07:02:44.271917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.271946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.792 [2024-11-20 07:02:44.272065] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.272082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.792 [2024-11-20 07:02:44.272194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.272209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.792 [2024-11-20 07:02:44.272325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:cccccccc cdw11:cccc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.272343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.792 #23 NEW cov: 12453 ft: 14533 corp: 8/121b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:06:39.792 [2024-11-20 07:02:44.340971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff038a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.792 [2024-11-20 07:02:44.341000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.051 #26 NEW cov: 12453 ft: 14696 corp: 9/131b lim: 35 exec/s: 0 rss: 73Mb L: 10/35 MS: 3 ChangeBit-ChangeBit-CrossOver- 00:06:40.051 [2024-11-20 07:02:44.391102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff038a cdw11:06000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.051 [2024-11-20 07:02:44.391131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.051 #27 NEW cov: 12453 ft: 14743 corp: 10/141b lim: 35 exec/s: 0 rss: 73Mb L: 10/35 MS: 1 PersAutoDict- DE: "\006\000\000\000"- 00:06:40.051 [2024-11-20 07:02:44.461386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000376 cdw11:f9ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.051 [2024-11-20 07:02:44.461415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.051 #28 NEW cov: 12453 ft: 14783 corp: 11/151b lim: 35 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 ChangeBinInt- 00:06:40.051 [2024-11-20 07:02:44.531781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3434020a cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.051 [2024-11-20 07:02:44.531809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.051 [2024-11-20 07:02:44.531925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:34c33434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.051 [2024-11-20 07:02:44.531954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.051 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:40.051 #32 NEW cov: 12476 ft: 
15076 corp: 12/166b lim: 35 exec/s: 0 rss: 74Mb L: 15/35 MS: 4 ChangeBit-InsertByte-ChangeByte-CrossOver- 00:06:40.052 [2024-11-20 07:02:44.581578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000600 cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.052 [2024-11-20 07:02:44.581608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.311 #33 NEW cov: 12476 ft: 15088 corp: 13/176b lim: 35 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 ShuffleBytes- 00:06:40.311 [2024-11-20 07:02:44.652481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:34340a34 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.311 [2024-11-20 07:02:44.652510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.311 [2024-11-20 07:02:44.652636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:34343434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.311 [2024-11-20 07:02:44.652655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.311 [2024-11-20 07:02:44.652773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:34343434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.311 [2024-11-20 07:02:44.652790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.311 #34 NEW cov: 12476 ft: 15298 corp: 14/199b lim: 35 exec/s: 34 rss: 74Mb L: 23/35 MS: 1 CopyPart- 00:06:40.311 [2024-11-20 07:02:44.712821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.311 [2024-11-20 07:02:44.712849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.311 [2024-11-20 07:02:44.712979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:3434ff34 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.311 [2024-11-20 07:02:44.712996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.311 [2024-11-20 07:02:44.713125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:34343434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.311 [2024-11-20 07:02:44.713140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.311 [2024-11-20 07:02:44.713266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:34343434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.311 [2024-11-20 07:02:44.713283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.311 #35 NEW cov: 12476 ft: 15350 corp: 15/229b lim: 35 exec/s: 35 rss: 74Mb L: 30/35 MS: 1 CrossOver- 00:06:40.311 [2024-11-20 07:02:44.782195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff038a 
cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.311 [2024-11-20 07:02:44.782223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.312 #41 NEW cov: 12476 ft: 15383 corp: 16/239b lim: 35 exec/s: 41 rss: 74Mb L: 10/35 MS: 1 ChangeBit- 00:06:40.312 [2024-11-20 07:02:44.833192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3434020a cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.312 [2024-11-20 07:02:44.833222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.312 [2024-11-20 07:02:44.833343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:34c33434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.312 [2024-11-20 07:02:44.833358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.312 [2024-11-20 07:02:44.833477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.312 [2024-11-20 07:02:44.833493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.312 [2024-11-20 07:02:44.833615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cccc34cc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.312 [2024-11-20 07:02:44.833648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.571 #42 NEW cov: 12476 ft: 15401 corp: 17/268b lim: 35 exec/s: 42 rss: 74Mb L: 29/35 MS: 1 CrossOver- 00:06:40.571 [2024-11-20 07:02:44.893755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.893787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:44.893907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.893924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:44.894043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.894063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:44.894174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.894192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:44.894317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:cc4ccccc 
cdw11:cccc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.894335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.571 #43 NEW cov: 12476 ft: 15411 corp: 18/303b lim: 35 exec/s: 43 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:06:40.571 [2024-11-20 07:02:44.943755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.943783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:44.943904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.943922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:44.944041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.944057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:44.944183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.944202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:44.944330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:cc4ccccc cdw11:cccc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:44.944345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.571 #44 NEW cov: 12476 ft: 15506 corp: 19/338b lim: 35 exec/s: 44 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:40.571 [2024-11-20 07:02:45.013743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3434020a cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:45.013770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:45.013886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:34c33434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:45.013904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:45.014016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:45.014035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:45.014153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cccc34cc 
cdw11:cc270003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:45.014172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.571 #45 NEW cov: 12476 ft: 15541 corp: 20/367b lim: 35 exec/s: 45 rss: 74Mb L: 29/35 MS: 1 ChangeByte- 00:06:40.571 [2024-11-20 07:02:45.083930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:45.083959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:45.084094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:45.084113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:45.084230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:45.084248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.571 [2024-11-20 07:02:45.084379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.571 [2024-11-20 07:02:45.084396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.571 #46 NEW cov: 12476 ft: 15561 corp: 21/398b lim: 35 exec/s: 46 rss: 74Mb L: 31/35 MS: 1 EraseBytes- 00:06:40.830 [2024-11-20 07:02:45.133525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff03ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.830 [2024-11-20 07:02:45.133554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.830 [2024-11-20 07:02:45.133677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8affffff cdw11:ff060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.830 [2024-11-20 07:02:45.133696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.830 #47 NEW cov: 12476 ft: 15578 corp: 22/416b lim: 35 exec/s: 47 rss: 74Mb L: 18/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:40.830 [2024-11-20 07:02:45.183716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3434020a cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.830 [2024-11-20 07:02:45.183744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.830 [2024-11-20 07:02:45.183875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:34363434 cdw11:c3340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.830 [2024-11-20 07:02:45.183893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:40.830 #48 NEW cov: 12476 ft: 15611 corp: 23/432b lim: 35 exec/s: 48 rss: 74Mb L: 16/35 MS: 1 InsertByte- 00:06:40.830 [2024-11-20 07:02:45.233596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.830 [2024-11-20 07:02:45.233630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.830 #49 NEW cov: 12476 ft: 15617 corp: 24/442b lim: 35 exec/s: 49 rss: 75Mb L: 10/35 MS: 1 ChangeBinInt- 00:06:40.831 [2024-11-20 07:02:45.304384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.831 [2024-11-20 07:02:45.304413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.831 [2024-11-20 07:02:45.304538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.831 [2024-11-20 07:02:45.304556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.831 [2024-11-20 07:02:45.304698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.831 [2024-11-20 07:02:45.304718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.831 #50 NEW cov: 12476 ft: 15639 corp: 25/463b lim: 35 exec/s: 50 rss: 75Mb L: 21/35 MS: 1 EraseBytes- 00:06:40.831 [2024-11-20 07:02:45.374578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:34ff0a34 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.831 [2024-11-20 07:02:45.374610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.831 [2024-11-20 07:02:45.374736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f5ffffff cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.831 [2024-11-20 07:02:45.374754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.831 [2024-11-20 07:02:45.374870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:34343434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.831 [2024-11-20 07:02:45.374886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.090 #51 NEW cov: 12476 ft: 15641 corp: 26/484b lim: 35 exec/s: 51 rss: 75Mb L: 21/35 MS: 1 CrossOver- 00:06:41.090 [2024-11-20 07:02:45.424689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.090 [2024-11-20 07:02:45.424716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.090 [2024-11-20 07:02:45.424850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 
cdw10:27cccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.090 [2024-11-20 07:02:45.424868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.090 [2024-11-20 07:02:45.424994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.090 [2024-11-20 07:02:45.425012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.090 #52 NEW cov: 12476 ft: 15666 corp: 27/506b lim: 35 exec/s: 52 rss: 75Mb L: 22/35 MS: 1 InsertByte- 00:06:41.090 [2024-11-20 07:02:45.494633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000600 cdw11:06000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.090 [2024-11-20 07:02:45.494662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.090 #53 NEW cov: 12476 ft: 15680 corp: 28/514b lim: 35 exec/s: 53 rss: 75Mb L: 8/35 MS: 1 EraseBytes- 00:06:41.090 [2024-11-20 07:02:45.534425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.090 [2024-11-20 07:02:45.534453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.090 #54 NEW cov: 12476 ft: 15696 corp: 29/522b lim: 35 exec/s: 54 rss: 75Mb L: 8/35 MS: 1 EraseBytes- 00:06:41.090 [2024-11-20 07:02:45.604727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff038a cdw11:06010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.090 [2024-11-20 07:02:45.604754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.090 [2024-11-20 07:02:45.645117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff038a cdw11:06010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.090 [2024-11-20 07:02:45.645144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.090 [2024-11-20 07:02:45.645269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff340008 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.090 [2024-11-20 07:02:45.645285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.349 #61 NEW cov: 12476 ft: 15741 corp: 30/536b lim: 35 exec/s: 30 rss: 75Mb L: 14/35 MS: 2 CMP-CrossOver- DE: "\001\000\000\010"- 00:06:41.349 #61 DONE cov: 12476 ft: 15741 corp: 30/536b lim: 35 exec/s: 30 rss: 75Mb 00:06:41.349 ###### Recommended dictionary. ###### 00:06:41.349 "\006\000\000\000" # Uses: 3 00:06:41.349 "\377\377\377\377\377\377\377\377" # Uses: 0 00:06:41.349 "\001\000\000\010" # Uses: 0 00:06:41.349 ###### End of recommended dictionary. 
###### 00:06:41.349 Done 61 runs in 2 second(s) 00:06:41.349 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:41.349 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:41.350 07:02:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:41.350 [2024-11-20 07:02:45.819890] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:41.350 [2024-11-20 07:02:45.819954] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776618 ] 00:06:41.608 [2024-11-20 07:02:46.077355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.608 [2024-11-20 07:02:46.127191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.867 [2024-11-20 07:02:46.186580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.867 [2024-11-20 07:02:46.202903] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:41.867 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:41.868 INFO: Seed: 3243796517 00:06:41.868 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:41.868 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:41.868 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:41.868 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.868 #2 INITED exec/s: 0 rss: 65Mb 00:06:41.868 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.868 This may also happen if the target rejected all inputs we tried so far 00:06:41.868 [2024-11-20 07:02:46.258829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffd7ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.868 [2024-11-20 07:02:46.258857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.868 [2024-11-20 07:02:46.258927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.868 [2024-11-20 07:02:46.258942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.868 [2024-11-20 07:02:46.258998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.868 [2024-11-20 07:02:46.259012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.868 [2024-11-20 07:02:46.259066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.868 [2024-11-20 07:02:46.259080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.127 NEW_FUNC[1/716]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:42.127 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.127 #16 NEW cov: 12260 ft: 12252 corp: 2/39b lim: 45 exec/s: 0 rss: 73Mb L: 38/38 MS: 4 ChangeByte-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:06:42.127 [2024-11-20 07:02:46.579099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04004a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.127 [2024-11-20 07:02:46.579132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.127 #23 NEW cov: 12373 ft: 13727 corp: 3/48b lim: 45 exec/s: 0 rss: 73Mb L: 9/38 MS: 2 ChangeBit-CMP- DE: "\001\004\000\000\000\000\000\000"- 00:06:42.127 [2024-11-20 07:02:46.619445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.127 [2024-11-20 07:02:46.619473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.127 [2024-11-20 07:02:46.619527] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.127 [2024-11-20 07:02:46.619540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.127 [2024-11-20 07:02:46.619589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.127 [2024-11-20 07:02:46.619607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.127 #30 NEW cov: 12379 ft: 14132 corp: 4/82b lim: 45 exec/s: 0 rss: 73Mb L: 34/38 MS: 2 ChangeByte-CrossOver- 00:06:42.127 [2024-11-20 07:02:46.659732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.127 [2024-11-20 07:02:46.659762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.127 [2024-11-20 07:02:46.659814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.127 [2024-11-20 07:02:46.659828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.127 [2024-11-20 07:02:46.659881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.127 [2024-11-20 07:02:46.659893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.127 [2024-11-20 07:02:46.659944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.127 [2024-11-20 07:02:46.659957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.386 #31 NEW cov: 12464 ft: 14356 corp: 5/120b lim: 45 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:42.386 [2024-11-20 07:02:46.719413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:044a0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.719438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.386 #32 NEW cov: 12464 ft: 14422 corp: 6/129b lim: 45 exec/s: 0 rss: 73Mb L: 9/38 MS: 1 ShuffleBytes- 00:06:42.386 [2024-11-20 07:02:46.779585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04004a01 cdw11:007a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.779615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.386 #33 NEW cov: 12464 ft: 14517 corp: 7/139b lim: 45 exec/s: 0 rss: 73Mb L: 10/38 MS: 1 InsertByte- 00:06:42.386 [2024-11-20 07:02:46.819665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04004a01 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.819690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.386 #34 NEW cov: 12464 ft: 14564 corp: 8/148b lim: 45 exec/s: 0 rss: 73Mb L: 9/38 MS: 1 CopyPart- 00:06:42.386 [2024-11-20 07:02:46.860255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.860281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.386 [2024-11-20 07:02:46.860347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffa0ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.860361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.386 [2024-11-20 07:02:46.860413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.860426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.386 [2024-11-20 07:02:46.860476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.860488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.386 #35 NEW cov: 12464 ft: 14735 corp: 9/186b lim: 45 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 ChangeByte- 00:06:42.386 [2024-11-20 07:02:46.920408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.920434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.386 [2024-11-20 07:02:46.920502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.920516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.386 [2024-11-20 07:02:46.920568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.920582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.386 [2024-11-20 07:02:46.920642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0400ff01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.386 [2024-11-20 07:02:46.920656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.644 #36 NEW cov: 12464 ft: 14743 corp: 10/228b lim: 45 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:06:42.644 [2024-11-20 
07:02:46.960063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:044a0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.644 [2024-11-20 07:02:46.960088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.644 #37 NEW cov: 12464 ft: 14810 corp: 11/238b lim: 45 exec/s: 0 rss: 73Mb L: 10/42 MS: 1 InsertByte- 00:06:42.644 [2024-11-20 07:02:47.020418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.644 [2024-11-20 07:02:47.020443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.644 [2024-11-20 07:02:47.020493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.644 [2024-11-20 07:02:47.020506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.644 #38 NEW cov: 12464 ft: 15049 corp: 12/259b lim: 45 exec/s: 0 rss: 73Mb L: 21/42 MS: 1 EraseBytes- 00:06:42.644 [2024-11-20 07:02:47.080480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.644 [2024-11-20 07:02:47.080504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.644 #39 NEW cov: 12464 ft: 15066 corp: 13/273b lim: 45 exec/s: 0 rss: 74Mb L: 14/42 MS: 1 EraseBytes- 00:06:42.644 [2024-11-20 07:02:47.140572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04004a01 cdw11:007a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.645 [2024-11-20 07:02:47.140603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.645 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:42.645 #40 NEW cov: 12487 ft: 15111 corp: 14/283b lim: 45 exec/s: 0 rss: 74Mb L: 10/42 MS: 1 ChangeBit- 00:06:42.903 [2024-11-20 07:02:47.201222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.903 [2024-11-20 07:02:47.201249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.903 [2024-11-20 07:02:47.201303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffa0ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.903 [2024-11-20 07:02:47.201317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.903 [2024-11-20 07:02:47.201370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.903 [2024-11-20 07:02:47.201384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.903 [2024-11-20 07:02:47.201433] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.903 [2024-11-20 07:02:47.201446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.903 #41 NEW cov: 12487 ft: 15172 corp: 15/327b lim: 45 exec/s: 0 rss: 74Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:06:42.903 [2024-11-20 07:02:47.240862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04004a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.903 [2024-11-20 07:02:47.240888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.904 #42 NEW cov: 12487 ft: 15194 corp: 16/336b lim: 45 exec/s: 42 rss: 74Mb L: 9/44 MS: 1 ShuffleBytes- 00:06:42.904 [2024-11-20 07:02:47.301004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04004a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.301030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.904 #43 NEW cov: 12487 ft: 15208 corp: 17/345b lim: 45 exec/s: 43 rss: 74Mb L: 9/44 MS: 1 ChangeBinInt- 00:06:42.904 [2024-11-20 07:02:47.341121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:abff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.341146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.904 #44 NEW cov: 12487 ft: 15263 corp: 18/359b lim: 45 exec/s: 44 rss: 74Mb L: 14/44 MS: 1 ChangeByte- 00:06:42.904 [2024-11-20 07:02:47.401689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.401714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.904 [2024-11-20 07:02:47.401784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.401798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.904 [2024-11-20 07:02:47.401852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.401866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.904 #45 NEW cov: 12487 ft: 15266 corp: 19/387b lim: 45 exec/s: 45 rss: 74Mb L: 28/44 MS: 1 EraseBytes- 00:06:42.904 [2024-11-20 07:02:47.441862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:044a0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.441887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.904 [2024-11-20 07:02:47.441939] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.441956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.904 [2024-11-20 07:02:47.442006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.442020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.904 [2024-11-20 07:02:47.442070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.904 [2024-11-20 07:02:47.442083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.163 #46 NEW cov: 12487 ft: 15278 corp: 20/429b lim: 45 exec/s: 46 rss: 74Mb L: 42/44 MS: 1 InsertRepeatedBytes- 00:06:43.163 [2024-11-20 07:02:47.501622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:044ae700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.501648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.163 #47 NEW cov: 12487 ft: 15282 corp: 21/439b lim: 45 exec/s: 47 rss: 74Mb L: 10/44 MS: 1 ChangeByte- 00:06:43.163 [2024-11-20 07:02:47.542146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.542171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.163 [2024-11-20 07:02:47.542225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffa0ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.542238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.163 [2024-11-20 07:02:47.542289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.542302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.163 [2024-11-20 07:02:47.542354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.542367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.163 #48 NEW cov: 12487 ft: 15303 corp: 22/483b lim: 45 exec/s: 48 rss: 74Mb L: 44/44 MS: 1 CopyPart- 00:06:43.163 [2024-11-20 07:02:47.582291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.582315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.163 [2024-11-20 07:02:47.582381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffa0ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.582395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.163 [2024-11-20 07:02:47.582445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.582459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.163 [2024-11-20 07:02:47.582509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.582525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.163 #49 NEW cov: 12487 ft: 15345 corp: 23/527b lim: 45 exec/s: 49 rss: 74Mb L: 44/44 MS: 1 ChangeBit- 00:06:43.163 [2024-11-20 07:02:47.641992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:044a0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.642016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.163 #50 NEW cov: 12487 ft: 15356 corp: 24/536b lim: 45 exec/s: 50 rss: 74Mb L: 9/44 MS: 1 ChangeBinInt- 00:06:43.163 [2024-11-20 07:02:47.682093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff01d7 cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.163 [2024-11-20 07:02:47.682117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.163 #51 NEW cov: 12487 ft: 15363 corp: 25/545b lim: 45 exec/s: 51 rss: 74Mb L: 9/44 MS: 1 CrossOver- 00:06:43.422 [2024-11-20 07:02:47.722225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:01040100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.722250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.422 #52 NEW cov: 12487 ft: 15373 corp: 26/555b lim: 45 exec/s: 52 rss: 74Mb L: 10/44 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:06:43.422 [2024-11-20 07:02:47.762318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04004a01 cdw11:00800000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.762343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.422 #53 NEW cov: 12487 ft: 15396 corp: 27/564b lim: 45 exec/s: 53 rss: 74Mb L: 9/44 MS: 1 ChangeBit- 00:06:43.422 [2024-11-20 07:02:47.802927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.802952] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.422 [2024-11-20 07:02:47.803002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.803016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.422 [2024-11-20 07:02:47.803067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.803080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.422 [2024-11-20 07:02:47.803130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0400ff01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.803143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.422 #54 NEW cov: 12487 ft: 15417 corp: 28/607b lim: 45 exec/s: 54 rss: 74Mb L: 43/44 MS: 1 InsertByte- 00:06:43.422 [2024-11-20 07:02:47.863126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.863150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.422 [2024-11-20 07:02:47.863219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.863235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.422 [2024-11-20 07:02:47.863288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.863301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.422 [2024-11-20 07:02:47.863349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0400ff01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.422 [2024-11-20 07:02:47.863362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.422 #55 NEW cov: 12487 ft: 15434 corp: 29/650b lim: 45 exec/s: 55 rss: 75Mb L: 43/44 MS: 1 ShuffleBytes- 00:06:43.423 [2024-11-20 07:02:47.923231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.423 [2024-11-20 07:02:47.923256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.423 [2024-11-20 07:02:47.923308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.423 [2024-11-20 07:02:47.923322] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.423 [2024-11-20 07:02:47.923372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.423 [2024-11-20 07:02:47.923385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.423 [2024-11-20 07:02:47.923435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.423 [2024-11-20 07:02:47.923448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.423 #56 NEW cov: 12487 ft: 15439 corp: 30/692b lim: 45 exec/s: 56 rss: 75Mb L: 42/44 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:06:43.423 [2024-11-20 07:02:47.963401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.423 [2024-11-20 07:02:47.963426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.423 [2024-11-20 07:02:47.963493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffefff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.423 [2024-11-20 07:02:47.963507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.423 [2024-11-20 07:02:47.963561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.423 [2024-11-20 07:02:47.963574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.423 [2024-11-20 07:02:47.963631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.423 [2024-11-20 07:02:47.963645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.682 #57 NEW cov: 12487 ft: 15476 corp: 31/730b lim: 45 exec/s: 57 rss: 75Mb L: 38/44 MS: 1 ChangeBit- 00:06:43.682 [2024-11-20 07:02:48.003152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04004a01 cdw11:00190006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.682 [2024-11-20 07:02:48.003181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.682 [2024-11-20 07:02:48.003234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffabff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.682 [2024-11-20 07:02:48.003248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.682 #58 NEW cov: 12487 ft: 15494 corp: 32/753b lim: 45 exec/s: 58 rss: 75Mb L: 23/44 MS: 1 CrossOver- 00:06:43.682 [2024-11-20 07:02:48.063333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.682 [2024-11-20 07:02:48.063358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.682 [2024-11-20 07:02:48.063410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.682 [2024-11-20 07:02:48.063423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.682 #59 NEW cov: 12487 ft: 15498 corp: 33/776b lim: 45 exec/s: 59 rss: 75Mb L: 23/44 MS: 1 EraseBytes- 00:06:43.683 [2024-11-20 07:02:48.103276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4a000100 cdw11:04000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.103301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.683 #60 NEW cov: 12487 ft: 15565 corp: 34/785b lim: 45 exec/s: 60 rss: 75Mb L: 9/44 MS: 1 ShuffleBytes- 00:06:43.683 [2024-11-20 07:02:48.143871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:044a0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.143896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.683 [2024-11-20 07:02:48.143949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.143962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.683 [2024-11-20 07:02:48.144011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.144025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.683 [2024-11-20 07:02:48.144074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.144087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.683 #61 NEW cov: 12487 ft: 15593 corp: 35/827b lim: 45 exec/s: 61 rss: 75Mb L: 42/44 MS: 1 CopyPart- 00:06:43.683 [2024-11-20 07:02:48.204008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.204033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.683 [2024-11-20 07:02:48.204088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffa0ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.204101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:06:43.683 [2024-11-20 07:02:48.204154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.204167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.683 [2024-11-20 07:02:48.204217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.683 [2024-11-20 07:02:48.204230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.683 #62 NEW cov: 12487 ft: 15599 corp: 36/870b lim: 45 exec/s: 62 rss: 75Mb L: 43/44 MS: 1 CrossOver- 00:06:43.942 [2024-11-20 07:02:48.244270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff19d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.942 [2024-11-20 07:02:48.244296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.942 [2024-11-20 07:02:48.244350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffa0ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.942 [2024-11-20 07:02:48.244363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.942 [2024-11-20 07:02:48.244415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:3aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.942 [2024-11-20 07:02:48.244445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.943 [2024-11-20 07:02:48.244494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.943 [2024-11-20 07:02:48.244507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.943 [2024-11-20 07:02:48.244558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff7b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.943 [2024-11-20 07:02:48.244571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.943 #63 NEW cov: 12487 ft: 15649 corp: 37/915b lim: 45 exec/s: 31 rss: 75Mb L: 45/45 MS: 1 InsertByte- 00:06:43.943 #63 DONE cov: 12487 ft: 15649 corp: 37/915b lim: 45 exec/s: 31 rss: 75Mb 00:06:43.943 ###### Recommended dictionary. ###### 00:06:43.943 "\001\004\000\000\000\000\000\000" # Uses: 3 00:06:43.943 ###### End of recommended dictionary. 
###### 00:06:43.943 Done 63 runs in 2 second(s) 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:43.943 07:02:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:43.943 [2024-11-20 07:02:48.440594] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:43.943 [2024-11-20 07:02:48.440673] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777135 ] 00:06:44.202 [2024-11-20 07:02:48.704620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.461 [2024-11-20 07:02:48.765928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.461 [2024-11-20 07:02:48.825094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.461 [2024-11-20 07:02:48.841430] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:44.461 INFO: Running with entropic power schedule (0xFF, 100). 
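The setup steps traced above repeat for every fuzzer index. Condensed into a minimal sketch — the output redirections are inferred from the file paths in the trace ($nvmf_cfg, $suppress_file), not shown verbatim in it:

fuzzer_type=6
port="44$(printf '%02d' "$fuzzer_type")"   # -> 4406; each fuzzer gets a private NVMe/TCP listen port
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    test/fuzz/llvm/nvmf/fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"   # per-run target config
echo leak:spdk_nvmf_qpair_disconnect  > /var/tmp/suppress_nvmf_fuzz    # known-leak LSAN suppressions,
echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_nvmf_fuzz    # consumed via LSAN_OPTIONS above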
00:06:44.461 INFO: Seed: 1586853671 00:06:44.461 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:44.461 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:44.461 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:44.461 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.461 #2 INITED exec/s: 0 rss: 65Mb 00:06:44.461 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:44.461 This may also happen if the target rejected all inputs we tried so far 00:06:44.461 [2024-11-20 07:02:48.886173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000860a cdw11:00000000 00:06:44.461 [2024-11-20 07:02:48.886209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.720 NEW_FUNC[1/714]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:44.720 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.720 #5 NEW cov: 12177 ft: 12165 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 3 ShuffleBytes-ShuffleBytes-InsertByte- 00:06:44.720 [2024-11-20 07:02:49.237097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000060a cdw11:00000000 00:06:44.720 [2024-11-20 07:02:49.237138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.980 #6 NEW cov: 12290 ft: 12829 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeBit- 00:06:44.980 [2024-11-20 07:02:49.327266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:44.980 [2024-11-20 07:02:49.327298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.980 [2024-11-20 07:02:49.327328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.980 [2024-11-20 07:02:49.327348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.980 #11 NEW cov: 12296 ft: 13270 corp: 4/10b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 5 EraseBytes-ChangeByte-CopyPart-CrossOver-InsertRepeatedBytes- 00:06:44.980 [2024-11-20 07:02:49.407456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:44.980 [2024-11-20 07:02:49.407485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.980 [2024-11-20 07:02:49.407531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.980 [2024-11-20 07:02:49.407546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.980 [2024-11-20 07:02:49.407573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002d cdw11:00000000 00:06:44.980 
[2024-11-20 07:02:49.407588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.980 #12 NEW cov: 12381 ft: 13654 corp: 5/16b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertByte- 00:06:44.980 [2024-11-20 07:02:49.497776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:44.980 [2024-11-20 07:02:49.497809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.980 [2024-11-20 07:02:49.497856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.980 [2024-11-20 07:02:49.497872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.980 [2024-11-20 07:02:49.497900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002d cdw11:00000000 00:06:44.980 [2024-11-20 07:02:49.497915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.239 #13 NEW cov: 12381 ft: 13727 corp: 6/22b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:06:45.239 [2024-11-20 07:02:49.587915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.587946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.239 [2024-11-20 07:02:49.587991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.588007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.239 [2024-11-20 07:02:49.588034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.588050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.239 #14 NEW cov: 12381 ft: 13823 corp: 7/28b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertByte- 00:06:45.239 [2024-11-20 07:02:49.638146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.638176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.239 [2024-11-20 07:02:49.638221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.638237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.239 [2024-11-20 07:02:49.638264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.638284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.239 [2024-11-20 07:02:49.638311] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.638326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.239 [2024-11-20 07:02:49.638352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.638367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.239 #15 NEW cov: 12381 ft: 14183 corp: 8/38b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:45.239 [2024-11-20 07:02:49.698151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.698182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.239 #16 NEW cov: 12381 ft: 14261 corp: 9/41b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 EraseBytes- 00:06:45.239 [2024-11-20 07:02:49.748261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002186 cdw11:00000000 00:06:45.239 [2024-11-20 07:02:49.748291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.239 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:45.239 #17 NEW cov: 12398 ft: 14365 corp: 10/44b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:06:45.499 [2024-11-20 07:02:49.799306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.799333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.799386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.799399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.799450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002d cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.799463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.499 #18 NEW cov: 12398 ft: 14483 corp: 11/51b lim: 10 exec/s: 0 rss: 73Mb L: 7/10 MS: 1 InsertByte- 00:06:45.499 [2024-11-20 07:02:49.859463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.859490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.859546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.859559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.859613] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff01 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.859626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.499 #19 NEW cov: 12398 ft: 14571 corp: 12/58b lim: 10 exec/s: 19 rss: 73Mb L: 7/10 MS: 1 CMP- DE: "\377\377\001\000"- 00:06:45.499 [2024-11-20 07:02:49.919663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.919691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.919761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.919775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.919829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.919842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.499 #20 NEW cov: 12398 ft: 14583 corp: 13/64b lim: 10 exec/s: 20 rss: 73Mb L: 6/10 MS: 1 EraseBytes- 00:06:45.499 [2024-11-20 07:02:49.980042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.980067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.980121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.980135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.980188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.980201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.980253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.980266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:49.980316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:49.980329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.499 #21 NEW cov: 12398 ft: 14668 corp: 14/74b lim: 10 exec/s: 21 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:45.499 [2024-11-20 07:02:50.020056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:50.020083] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:50.020140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:50.020154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:50.020206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002c00 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:50.020220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.499 [2024-11-20 07:02:50.020274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002d45 cdw11:00000000 00:06:45.499 [2024-11-20 07:02:50.020288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.499 #22 NEW cov: 12398 ft: 14754 corp: 15/82b lim: 10 exec/s: 22 rss: 73Mb L: 8/10 MS: 1 InsertByte- 00:06:45.759 [2024-11-20 07:02:50.059808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000218e cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.059833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.759 #23 NEW cov: 12398 ft: 14825 corp: 16/85b lim: 10 exec/s: 23 rss: 73Mb L: 3/10 MS: 1 ChangeBit- 00:06:45.759 [2024-11-20 07:02:50.120208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002186 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.120237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.759 [2024-11-20 07:02:50.120294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a21 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.120308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.759 #24 NEW cov: 12398 ft: 14842 corp: 17/89b lim: 10 exec/s: 24 rss: 73Mb L: 4/10 MS: 1 CopyPart- 00:06:45.759 [2024-11-20 07:02:50.160568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e00 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.160594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.759 [2024-11-20 07:02:50.160670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.160684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.759 [2024-11-20 07:02:50.160737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.160750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.759 [2024-11-20 07:02:50.160804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.160817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.759 [2024-11-20 07:02:50.160871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.160884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.759 #25 NEW cov: 12398 ft: 14852 corp: 18/99b lim: 10 exec/s: 25 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:06:45.759 [2024-11-20 07:02:50.220537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.220562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.759 [2024-11-20 07:02:50.220619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.220633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.759 [2024-11-20 07:02:50.220684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff01 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.220697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.759 #26 NEW cov: 12398 ft: 14897 corp: 19/106b lim: 10 exec/s: 26 rss: 73Mb L: 7/10 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:06:45.759 [2024-11-20 07:02:50.280548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000606 cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.280573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.759 [2024-11-20 07:02:50.280645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:45.759 [2024-11-20 07:02:50.280659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.759 #27 NEW cov: 12398 ft: 14929 corp: 20/110b lim: 10 exec/s: 27 rss: 73Mb L: 4/10 MS: 1 CopyPart- 00:06:46.018 [2024-11-20 07:02:50.320769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:46.018 [2024-11-20 07:02:50.320794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.018 [2024-11-20 07:02:50.320851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.018 [2024-11-20 07:02:50.320864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.018 [2024-11-20 07:02:50.320916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.018 [2024-11-20 07:02:50.320929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:06:46.018 #28 NEW cov: 12398 ft: 14972 corp: 21/117b lim: 10 exec/s: 28 rss: 73Mb L: 7/10 MS: 1 EraseBytes- 00:06:46.018 [2024-11-20 07:02:50.360653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:46.018 [2024-11-20 07:02:50.360679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.018 #29 NEW cov: 12398 ft: 15002 corp: 22/119b lim: 10 exec/s: 29 rss: 73Mb L: 2/10 MS: 1 EraseBytes- 00:06:46.018 [2024-11-20 07:02:50.421080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:46.018 [2024-11-20 07:02:50.421105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.018 [2024-11-20 07:02:50.421159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.019 [2024-11-20 07:02:50.421173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.019 [2024-11-20 07:02:50.421226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002d cdw11:00000000 00:06:46.019 [2024-11-20 07:02:50.421239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.019 #30 NEW cov: 12398 ft: 15023 corp: 23/125b lim: 10 exec/s: 30 rss: 73Mb L: 6/10 MS: 1 ShuffleBytes- 00:06:46.019 [2024-11-20 07:02:50.460935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a21 cdw11:00000000 00:06:46.019 [2024-11-20 07:02:50.460960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.019 #31 NEW cov: 12398 ft: 15058 corp: 24/127b lim: 10 exec/s: 31 rss: 74Mb L: 2/10 MS: 1 EraseBytes- 00:06:46.019 [2024-11-20 07:02:50.521648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.019 [2024-11-20 07:02:50.521672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.019 [2024-11-20 07:02:50.521728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000001ff cdw11:00000000 00:06:46.019 [2024-11-20 07:02:50.521742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.019 [2024-11-20 07:02:50.521799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff01 cdw11:00000000 00:06:46.019 [2024-11-20 07:02:50.521812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.019 [2024-11-20 07:02:50.521864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:00000000 00:06:46.019 [2024-11-20 07:02:50.521880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.019 [2024-11-20 07:02:50.521932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000100 cdw11:00000000 00:06:46.019 [2024-11-20 07:02:50.521946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.019 #32 NEW cov: 12398 ft: 15089 corp: 25/137b lim: 10 exec/s: 32 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:06:46.279 [2024-11-20 07:02:50.581649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001600 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.581674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.581727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.581741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.581794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.581807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.581859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.581872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.279 #33 NEW cov: 12398 ft: 15116 corp: 26/145b lim: 10 exec/s: 33 rss: 74Mb L: 8/10 MS: 1 CMP- DE: "\026\000"- 00:06:46.279 [2024-11-20 07:02:50.641940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.641965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.642015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.642028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.642080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.642093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.642147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.642160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.642212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.642225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.279 #34 NEW cov: 12398 ft: 15154 corp: 27/155b lim: 10 exec/s: 34 rss: 74Mb 
L: 10/10 MS: 1 ShuffleBytes- 00:06:46.279 [2024-11-20 07:02:50.681908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a93 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.681934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.681987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009393 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.682001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.682059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009393 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.682073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.682136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009393 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.682149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.279 #36 NEW cov: 12398 ft: 15162 corp: 28/164b lim: 10 exec/s: 36 rss: 74Mb L: 9/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:46.279 [2024-11-20 07:02:50.742207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e00 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.742232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.742287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.742300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.742362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bd00 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.742374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.742428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.742441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.279 [2024-11-20 07:02:50.742493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.742506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.279 #37 NEW cov: 12405 ft: 15198 corp: 29/174b lim: 10 exec/s: 37 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:06:46.279 [2024-11-20 07:02:50.801874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000624 cdw11:00000000 00:06:46.279 [2024-11-20 07:02:50.801899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.279 #38 NEW cov: 12405 ft: 15203 corp: 30/176b lim: 10 exec/s: 38 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:06:46.538 [2024-11-20 07:02:50.842483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:46.538 [2024-11-20 07:02:50.842509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.538 [2024-11-20 07:02:50.842563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:46.538 [2024-11-20 07:02:50.842576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.538 [2024-11-20 07:02:50.842608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.538 [2024-11-20 07:02:50.842619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.538 [2024-11-20 07:02:50.842674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.538 [2024-11-20 07:02:50.842688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.538 [2024-11-20 07:02:50.842746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000010 cdw11:00000000 00:06:46.538 [2024-11-20 07:02:50.842759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.538 #39 NEW cov: 12405 ft: 15206 corp: 31/186b lim: 10 exec/s: 39 rss: 74Mb L: 10/10 MS: 1 ChangeBit- 00:06:46.538 [2024-11-20 07:02:50.882128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f306 cdw11:00000000 00:06:46.538 [2024-11-20 07:02:50.882153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.538 #40 NEW cov: 12405 ft: 15228 corp: 32/189b lim: 10 exec/s: 20 rss: 74Mb L: 3/10 MS: 1 InsertByte- 00:06:46.538 #40 DONE cov: 12405 ft: 15228 corp: 32/189b lim: 10 exec/s: 20 rss: 74Mb 00:06:46.538 ###### Recommended dictionary. ###### 00:06:46.538 "\377\377\001\000" # Uses: 1 00:06:46.538 "\026\000" # Uses: 0 00:06:46.538 ###### End of recommended dictionary. 
###### 00:06:46.538 Done 40 runs in 2 second(s) 00:06:46.538 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.538 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.539 07:02:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:46.539 [2024-11-20 07:02:51.080068] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:46.539 [2024-11-20 07:02:51.080157] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777458 ] 00:06:46.798 [2024-11-20 07:02:51.335742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.057 [2024-11-20 07:02:51.392000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.057 [2024-11-20 07:02:51.451395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.057 [2024-11-20 07:02:51.467748] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:47.057 INFO: Running with entropic power schedule (0xFF, 100). 
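The llvm_nvme_fuzz flags in the invocation above map onto the run.sh locals that feed them; the glosses below are inferred from those variable names and the DPDK EAL parameter echo that follows, not from authoritative documentation:

# -m 0x1  core mask ($core)           -s 512  hugepage memory in MB (echoed as "-m 512" by EAL)
# -P ...  output path for artifacts   -F ...  NVMe-oF transport ID of the target to fuzz ($trid)
# -c ...  per-run target JSON config ($nvmf_cfg, trsvcid rewritten to 4407)
# -t 1    run time in seconds ($timen)
# -D ...  corpus directory ($corpus_dir)
# -Z 7    fuzzer type, i.e. which admin command to exercise ($fuzzer_type)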
00:06:47.057 INFO: Seed: 4212836278 00:06:47.057 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:47.057 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:47.057 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:47.057 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.057 #2 INITED exec/s: 0 rss: 65Mb 00:06:47.057 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:47.057 This may also happen if the target rejected all inputs we tried so far 00:06:47.057 [2024-11-20 07:02:51.523521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.057 [2024-11-20 07:02:51.523551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.057 [2024-11-20 07:02:51.523607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.057 [2024-11-20 07:02:51.523622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.057 [2024-11-20 07:02:51.523670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.057 [2024-11-20 07:02:51.523684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.057 [2024-11-20 07:02:51.523733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:47.057 [2024-11-20 07:02:51.523746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.315 NEW_FUNC[1/714]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:47.315 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.315 #4 NEW cov: 12177 ft: 12175 corp: 2/9b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:47.315 [2024-11-20 07:02:51.843969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e121 cdw11:00000000 00:06:47.315 [2024-11-20 07:02:51.844002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.315 #6 NEW cov: 12290 ft: 13238 corp: 3/11b lim: 10 exec/s: 0 rss: 73Mb L: 2/8 MS: 2 ChangeByte-InsertByte- 00:06:47.574 [2024-11-20 07:02:51.884322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.884348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.574 [2024-11-20 07:02:51.884415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000080 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.884429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
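To read the libFuzzer status lines interleaved with the qpair traces (these are libFuzzer's conventional fields, not SPDK-specific): in a line such as "#6 NEW cov: 12290 ft: 13238 corp: 3/11b lim: 10 exec/s: 0 rss: 73Mb L: 2/8 MS: 2 ChangeByte-InsertByte-", cov is the number of covered code edges, ft counts finer-grained coverage features, corp is the corpus size as inputs/total bytes, lim is the current input-length cap, L is the new input's length over the longest corpus entry, and MS lists how many mutations produced the input and which ones. NEW marks an input that added coverage, DONE closes the run with final totals, and the "Recommended dictionary" block lists byte strings (with use counts) the fuzzer found productive.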
00:06:47.574 [2024-11-20 07:02:51.884480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.884494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.574 [2024-11-20 07:02:51.884542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.884555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.574 #7 NEW cov: 12296 ft: 13455 corp: 4/19b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:06:47.574 [2024-11-20 07:02:51.944546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.944572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.574 [2024-11-20 07:02:51.944628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.944642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.574 [2024-11-20 07:02:51.944694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.944706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.574 [2024-11-20 07:02:51.944758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.944771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.574 #8 NEW cov: 12381 ft: 13677 corp: 5/27b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:06:47.574 [2024-11-20 07:02:51.984230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ae1 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:51.984255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.574 #11 NEW cov: 12381 ft: 13843 corp: 6/30b lim: 10 exec/s: 0 rss: 73Mb L: 3/8 MS: 3 ShuffleBytes-ShuffleBytes-CrossOver- 00:06:47.574 [2024-11-20 07:02:52.024744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004700 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:52.024769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.574 [2024-11-20 07:02:52.024838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:52.024851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.574 [2024-11-20 07:02:52.024902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:52.024915] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.574 [2024-11-20 07:02:52.024965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.574 [2024-11-20 07:02:52.024979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.574 #12 NEW cov: 12381 ft: 13902 corp: 7/38b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CMP- DE: "G\000\000\000\000\000\000\000"- 00:06:47.574 [2024-11-20 07:02:52.084525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 00:06:47.574 [2024-11-20 07:02:52.084550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.574 #15 NEW cov: 12381 ft: 14053 corp: 8/40b lim: 10 exec/s: 0 rss: 73Mb L: 2/8 MS: 3 ShuffleBytes-CrossOver-InsertByte- 00:06:47.574 [2024-11-20 07:02:52.124632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:47.574 [2024-11-20 07:02:52.124657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.833 #17 NEW cov: 12381 ft: 14112 corp: 9/42b lim: 10 exec/s: 0 rss: 73Mb L: 2/8 MS: 2 ShuffleBytes-CrossOver- 00:06:47.833 [2024-11-20 07:02:52.165168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.165197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.165265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004700 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.165278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.165335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.165348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.165399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.165413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.833 #18 NEW cov: 12381 ft: 14157 corp: 10/50b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:06:47.833 [2024-11-20 07:02:52.224924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a5b cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.224949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.833 #19 NEW cov: 12381 ft: 14234 corp: 11/52b lim: 10 exec/s: 0 rss: 73Mb L: 2/8 MS: 1 InsertByte- 00:06:47.833 [2024-11-20 07:02:52.265408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.265433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.265501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004700 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.265514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.265567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000060 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.265580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.265638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.265651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.833 #20 NEW cov: 12381 ft: 14296 corp: 12/60b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:06:47.833 [2024-11-20 07:02:52.325611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004700 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.325636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.325703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.325717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.325771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.325784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.833 [2024-11-20 07:02:52.325835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.325848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.833 #21 NEW cov: 12381 ft: 14316 corp: 13/68b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:06:47.833 [2024-11-20 07:02:52.365380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac0 cdw11:00000000 00:06:47.833 [2024-11-20 07:02:52.365406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.092 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:48.092 #22 NEW cov: 12404 ft: 14389 corp: 14/71b lim: 10 exec/s: 0 rss: 74Mb L: 3/8 MS: 1 InsertByte- 00:06:48.092 [2024-11-20 07:02:52.426000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.426025] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.426093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.426107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.426156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.426170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.426220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.426233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.426284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.426298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.092 #23 NEW cov: 12404 ft: 14427 corp: 15/81b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:48.092 [2024-11-20 07:02:52.466013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.466038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.466107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.466120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.466171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000031 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.466184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.466238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.466251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.092 #24 NEW cov: 12404 ft: 14435 corp: 16/89b lim: 10 exec/s: 24 rss: 74Mb L: 8/10 MS: 1 ChangeByte- 00:06:48.092 [2024-11-20 07:02:52.525906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.525933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.526001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.526021] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.092 #26 NEW cov: 12404 ft: 14645 corp: 17/94b lim: 10 exec/s: 26 rss: 74Mb L: 5/10 MS: 2 ShuffleBytes-CrossOver- 00:06:48.092 [2024-11-20 07:02:52.566295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.566321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.566373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.566386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.566435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.566448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.566499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.566512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.092 #27 NEW cov: 12404 ft: 14662 corp: 18/102b lim: 10 exec/s: 27 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:06:48.092 [2024-11-20 07:02:52.626438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.626464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.092 [2024-11-20 07:02:52.626517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.092 [2024-11-20 07:02:52.626531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.093 [2024-11-20 07:02:52.626581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.093 [2024-11-20 07:02:52.626594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.093 [2024-11-20 07:02:52.626668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.093 [2024-11-20 07:02:52.626681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.352 #28 NEW cov: 12404 ft: 14681 corp: 19/111b lim: 10 exec/s: 28 rss: 74Mb L: 9/10 MS: 1 CopyPart- 00:06:48.352 [2024-11-20 07:02:52.686608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.686633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.352 [2024-11-20 07:02:52.686700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.686714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.352 [2024-11-20 07:02:52.686775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.686804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.352 [2024-11-20 07:02:52.686857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.686870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.352 #29 NEW cov: 12404 ft: 14703 corp: 20/119b lim: 10 exec/s: 29 rss: 74Mb L: 8/10 MS: 1 ChangeBinInt- 00:06:48.352 [2024-11-20 07:02:52.746694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.746720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.352 [2024-11-20 07:02:52.746785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.746800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.352 [2024-11-20 07:02:52.746851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.746865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.352 #30 NEW cov: 12404 ft: 14834 corp: 21/125b lim: 10 exec/s: 30 rss: 74Mb L: 6/10 MS: 1 EraseBytes- 00:06:48.352 [2024-11-20 07:02:52.806588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.806619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.352 #31 NEW cov: 12404 ft: 14886 corp: 22/127b lim: 10 exec/s: 31 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:48.352 [2024-11-20 07:02:52.846683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003b32 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.846708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.352 #34 NEW cov: 12404 ft: 14923 corp: 23/129b lim: 10 exec/s: 34 rss: 74Mb L: 2/10 MS: 3 ChangeByte-ShuffleBytes-InsertByte- 00:06:48.352 [2024-11-20 07:02:52.887050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.887076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.352 [2024-11-20 07:02:52.887130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000047 cdw11:00000000 00:06:48.352 
[2024-11-20 07:02:52.887143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.352 [2024-11-20 07:02:52.887196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.352 [2024-11-20 07:02:52.887209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.611 #35 NEW cov: 12404 ft: 14940 corp: 24/135b lim: 10 exec/s: 35 rss: 74Mb L: 6/10 MS: 1 CopyPart- 00:06:48.611 [2024-11-20 07:02:52.946960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:48.611 [2024-11-20 07:02:52.946985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.611 #36 NEW cov: 12404 ft: 14951 corp: 25/138b lim: 10 exec/s: 36 rss: 74Mb L: 3/10 MS: 1 InsertByte- 00:06:48.611 [2024-11-20 07:02:52.987455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:52.987482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:52.987550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:52.987564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:52.987636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:52.987650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:52.987701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:52.987715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.611 #37 NEW cov: 12404 ft: 14970 corp: 26/146b lim: 10 exec/s: 37 rss: 74Mb L: 8/10 MS: 1 PersAutoDict- DE: "G\000\000\000\000\000\000\000"- 00:06:48.611 [2024-11-20 07:02:53.027321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.027346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:53.027397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.027410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.611 #38 NEW cov: 12404 ft: 15022 corp: 27/151b lim: 10 exec/s: 38 rss: 74Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:48.611 [2024-11-20 07:02:53.067562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.067587] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:53.067661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.067675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:53.067726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a50 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.067740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.611 #39 NEW cov: 12404 ft: 15039 corp: 28/157b lim: 10 exec/s: 39 rss: 74Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:06:48.611 [2024-11-20 07:02:53.127839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.127863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:53.127930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.127943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:53.127994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.128008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.611 [2024-11-20 07:02:53.128059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.611 [2024-11-20 07:02:53.128072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.611 #40 NEW cov: 12404 ft: 15057 corp: 29/165b lim: 10 exec/s: 40 rss: 74Mb L: 8/10 MS: 1 ChangeBinInt- 00:06:48.870 [2024-11-20 07:02:53.167606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002e32 cdw11:00000000 00:06:48.870 [2024-11-20 07:02:53.167631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.870 #44 NEW cov: 12404 ft: 15102 corp: 30/167b lim: 10 exec/s: 44 rss: 74Mb L: 2/10 MS: 4 EraseBytes-ChangeByte-ChangeByte-InsertByte- 00:06:48.870 [2024-11-20 07:02:53.227762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a5b cdw11:00000000 00:06:48.870 [2024-11-20 07:02:53.227787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.870 #45 NEW cov: 12404 ft: 15154 corp: 31/169b lim: 10 exec/s: 45 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:48.870 [2024-11-20 07:02:53.288320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004700 cdw11:00000000 00:06:48.870 [2024-11-20 07:02:53.288344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:48.870 [2024-11-20 07:02:53.288413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.870 [2024-11-20 07:02:53.288427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.870 [2024-11-20 07:02:53.288481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.870 [2024-11-20 07:02:53.288493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.870 [2024-11-20 07:02:53.288544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.870 [2024-11-20 07:02:53.288557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.870 #46 NEW cov: 12404 ft: 15160 corp: 32/177b lim: 10 exec/s: 46 rss: 75Mb L: 8/10 MS: 1 ShuffleBytes- 00:06:48.870 [2024-11-20 07:02:53.348521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.870 [2024-11-20 07:02:53.348546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.870 [2024-11-20 07:02:53.348620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000080 cdw11:00000000 00:06:48.871 [2024-11-20 07:02:53.348635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.871 [2024-11-20 07:02:53.348687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.871 [2024-11-20 07:02:53.348700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.871 [2024-11-20 07:02:53.348752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002e00 cdw11:00000000 00:06:48.871 [2024-11-20 07:02:53.348766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.871 #47 NEW cov: 12404 ft: 15172 corp: 33/186b lim: 10 exec/s: 47 rss: 75Mb L: 9/10 MS: 1 InsertByte- 00:06:48.871 [2024-11-20 07:02:53.388442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008aff cdw11:00000000 00:06:48.871 [2024-11-20 07:02:53.388468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.871 [2024-11-20 07:02:53.388521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:48.871 [2024-11-20 07:02:53.388534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.871 [2024-11-20 07:02:53.388586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a50 cdw11:00000000 00:06:48.871 [2024-11-20 07:02:53.388607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:06:49.130 #48 NEW cov: 12404 ft: 15190 corp: 34/192b lim: 10 exec/s: 48 rss: 75Mb L: 6/10 MS: 1 ChangeBit- 00:06:49.130 [2024-11-20 07:02:53.448378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac0 cdw11:00000000 00:06:49.130 [2024-11-20 07:02:53.448403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.130 #49 NEW cov: 12404 ft: 15201 corp: 35/195b lim: 10 exec/s: 49 rss: 75Mb L: 3/10 MS: 1 ChangeBit- 00:06:49.130 [2024-11-20 07:02:53.508789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:49.130 [2024-11-20 07:02:53.508814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.130 [2024-11-20 07:02:53.508882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000042 cdw11:00000000 00:06:49.130 [2024-11-20 07:02:53.508896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.130 [2024-11-20 07:02:53.508951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.130 [2024-11-20 07:02:53.508964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.130 #50 NEW cov: 12404 ft: 15223 corp: 36/201b lim: 10 exec/s: 25 rss: 75Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:49.130 #50 DONE cov: 12404 ft: 15223 corp: 36/201b lim: 10 exec/s: 25 rss: 75Mb 00:06:49.130 ###### Recommended dictionary. ###### 00:06:49.130 "G\000\000\000\000\000\000\000" # Uses: 1 00:06:49.130 ###### End of recommended dictionary. 
######
00:06:49.130 Done 50 runs in 2 second(s)
00:06:49.130 07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408'
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
07:02:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8
00:06:49.389 [2024-11-20 07:02:53.704200] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
[2024-11-20 07:02:53.704271] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777981 ]
00:06:49.648 [2024-11-20 07:02:53.966043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.648 [2024-11-20 07:02:54.022799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.648 [2024-11-20 07:02:54.082110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:49.648 [2024-11-20 07:02:54.098443] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 ***
00:06:49.648 INFO: Running with entropic power schedule (0xFF, 100).
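Condensed, the nvmf/run.sh trace above does the same setup for every instance: derive a per-instance TCP service id (44 followed by printf %02d of the instance number, so instance 8 listens on 4408), rewrite the JSON target config from the default 4420 listener to that port, write an LSAN suppressions file covering two known allocation paths, and launch llvm_nvme_fuzz against the resulting transport id. A sketch of those steps as a standalone script; SPDK_ROOT stands in for the workspace spdk checkout, and the sed/echo output redirections are assumptions, since bash xtrace does not record redirections:

  i=8                                           # instance number, as in: start_llvm_fuzz 8 1 0x1
  port="44$(printf %02d "$i")"                  # printf %02d 8 -> 08, hence port=4408
  corpus_dir="$SPDK_ROOT/../corpus/llvm_nvmf_$i"
  nvmf_cfg="/tmp/fuzz_json_$i.conf"
  suppress_file="/var/tmp/suppress_nvmf_fuzz"
  mkdir -p "$corpus_dir"
  # retarget the JSON config from the default listener (4420) to this instance's port
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # suppress leak reports attributed to these two known call paths
  echo leak:spdk_nvmf_qpair_disconnect  > "$suppress_file"
  echo leak:nvmf_ctrlr_create          >> "$suppress_file"
  LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
    "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
    -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$i"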
00:06:49.648 INFO: Seed: 2548874941 00:06:49.648 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:49.648 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:49.648 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:49.648 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.648 [2024-11-20 07:02:54.165434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.648 [2024-11-20 07:02:54.165469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.648 #2 INITED cov: 12202 ft: 12204 corp: 1/1b exec/s: 0 rss: 71Mb 00:06:49.907 [2024-11-20 07:02:54.215563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.907 [2024-11-20 07:02:54.215592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.907 #3 NEW cov: 12318 ft: 12772 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 CopyPart- 00:06:49.907 [2024-11-20 07:02:54.285861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.907 [2024-11-20 07:02:54.285889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.907 #4 NEW cov: 12324 ft: 13118 corp: 3/3b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:06:49.907 [2024-11-20 07:02:54.356258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.907 [2024-11-20 07:02:54.356285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.907 #5 NEW cov: 12409 ft: 13295 corp: 4/4b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:06:49.907 [2024-11-20 07:02:54.426313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.907 [2024-11-20 07:02:54.426341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.165 #6 NEW cov: 12409 ft: 13338 corp: 5/5b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeBinInt- 00:06:50.165 [2024-11-20 07:02:54.496526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.165 [2024-11-20 07:02:54.496555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.165 #7 NEW cov: 12409 ft: 13481 corp: 6/6b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:06:50.165 [2024-11-20 07:02:54.566881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.165 [2024-11-20 07:02:54.566912] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.165 #8 NEW cov: 12409 ft: 13565 corp: 7/7b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeBit- 00:06:50.165 [2024-11-20 07:02:54.618134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.618160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.166 [2024-11-20 07:02:54.618229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.618244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.166 [2024-11-20 07:02:54.618315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.618329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.166 [2024-11-20 07:02:54.618401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.618415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.166 #9 NEW cov: 12409 ft: 14433 corp: 8/11b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:50.166 [2024-11-20 07:02:54.667225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.667252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.166 #10 NEW cov: 12409 ft: 14509 corp: 9/12b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ChangeBinInt- 00:06:50.166 [2024-11-20 07:02:54.718614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.718640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.166 [2024-11-20 07:02:54.718712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.718727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.166 [2024-11-20 07:02:54.718800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.718815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.166 [2024-11-20 07:02:54.718886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.166 [2024-11-20 07:02:54.718899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.425 #11 NEW cov: 12409 ft: 14567 corp: 10/16b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:50.425 [2024-11-20 07:02:54.768691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.425 [2024-11-20 07:02:54.768716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.425 [2024-11-20 07:02:54.768802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.425 [2024-11-20 07:02:54.768816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.425 [2024-11-20 07:02:54.768883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.425 [2024-11-20 07:02:54.768896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.425 [2024-11-20 07:02:54.768968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.425 [2024-11-20 07:02:54.768981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.425 #12 NEW cov: 12409 ft: 14616 corp: 11/20b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:06:50.425 [2024-11-20 07:02:54.837976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.425 [2024-11-20 07:02:54.838003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.425 #13 NEW cov: 12409 ft: 14658 corp: 12/21b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ChangeBit- 00:06:50.425 [2024-11-20 07:02:54.888217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.425 [2024-11-20 07:02:54.888243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.425 #14 NEW cov: 12409 ft: 14669 corp: 13/22b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ChangeByte- 00:06:50.425 [2024-11-20 07:02:54.938342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.425 [2024-11-20 07:02:54.938371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.425 #15 NEW cov: 12409 ft: 14714 corp: 14/23b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:50.683 [2024-11-20 07:02:54.988662] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.683 [2024-11-20 07:02:54.988689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.683 #16 NEW cov: 12409 ft: 14752 corp: 15/24b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ChangeByte- 00:06:50.683 [2024-11-20 07:02:55.038786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.683 [2024-11-20 07:02:55.038813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.942 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:50.942 #17 NEW cov: 12432 ft: 14840 corp: 16/25b lim: 5 exec/s: 17 rss: 74Mb L: 1/4 MS: 1 ChangeBit- 00:06:50.942 [2024-11-20 07:02:55.369709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.942 [2024-11-20 07:02:55.369752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.942 [2024-11-20 07:02:55.369910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.942 [2024-11-20 07:02:55.369936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.942 #18 NEW cov: 12432 ft: 15338 corp: 17/27b lim: 5 exec/s: 18 rss: 74Mb L: 2/4 MS: 1 CrossOver- 00:06:50.942 [2024-11-20 07:02:55.419261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.942 [2024-11-20 07:02:55.419290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.942 #19 NEW cov: 12432 ft: 15383 corp: 18/28b lim: 5 exec/s: 19 rss: 74Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:50.942 [2024-11-20 07:02:55.469810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.942 [2024-11-20 07:02:55.469840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.942 [2024-11-20 07:02:55.469983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.942 [2024-11-20 07:02:55.469998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.201 #20 NEW cov: 12432 ft: 15402 corp: 19/30b lim: 5 exec/s: 20 rss: 74Mb L: 2/4 MS: 1 InsertByte- 00:06:51.201 [2024-11-20 07:02:55.540784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.540813] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.201 [2024-11-20 07:02:55.540962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.540981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.201 [2024-11-20 07:02:55.541131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.541151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.201 [2024-11-20 07:02:55.541289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.541306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.201 #21 NEW cov: 12432 ft: 15456 corp: 20/34b lim: 5 exec/s: 21 rss: 74Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:51.201 [2024-11-20 07:02:55.590285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.590313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.201 [2024-11-20 07:02:55.590451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.590467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.201 #22 NEW cov: 12432 ft: 15507 corp: 21/36b lim: 5 exec/s: 22 rss: 74Mb L: 2/4 MS: 1 EraseBytes- 00:06:51.201 [2024-11-20 07:02:55.660150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.660179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.201 #23 NEW cov: 12432 ft: 15515 corp: 22/37b lim: 5 exec/s: 23 rss: 74Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:51.201 [2024-11-20 07:02:55.710957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.710988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.201 [2024-11-20 07:02:55.711127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.711144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.201 [2024-11-20 07:02:55.711285] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.201 [2024-11-20 07:02:55.711302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.201 #24 NEW cov: 12432 ft: 15680 corp: 23/40b lim: 5 exec/s: 24 rss: 74Mb L: 3/4 MS: 1 CrossOver- 00:06:51.460 [2024-11-20 07:02:55.780791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.460 [2024-11-20 07:02:55.780821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.460 [2024-11-20 07:02:55.780951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.460 [2024-11-20 07:02:55.780971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.460 #25 NEW cov: 12432 ft: 15720 corp: 24/42b lim: 5 exec/s: 25 rss: 74Mb L: 2/4 MS: 1 InsertByte- 00:06:51.460 [2024-11-20 07:02:55.851894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.460 [2024-11-20 07:02:55.851923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.460 [2024-11-20 07:02:55.852067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.460 [2024-11-20 07:02:55.852086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.460 [2024-11-20 07:02:55.852229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.460 [2024-11-20 07:02:55.852247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.460 [2024-11-20 07:02:55.852385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.460 [2024-11-20 07:02:55.852403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.460 #26 NEW cov: 12432 ft: 15745 corp: 25/46b lim: 5 exec/s: 26 rss: 74Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:51.460 [2024-11-20 07:02:55.900958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.460 [2024-11-20 07:02:55.900986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.460 #27 NEW cov: 12432 ft: 15814 corp: 26/47b lim: 5 exec/s: 27 rss: 74Mb L: 1/4 MS: 1 EraseBytes- 00:06:51.460 [2024-11-20 07:02:55.971157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 
cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.460 [2024-11-20 07:02:55.971186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.460 #28 NEW cov: 12432 ft: 15848 corp: 27/48b lim: 5 exec/s: 28 rss: 74Mb L: 1/4 MS: 1 ChangeBit- 00:06:51.719 [2024-11-20 07:02:56.041779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.719 [2024-11-20 07:02:56.041807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.719 [2024-11-20 07:02:56.041953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.719 [2024-11-20 07:02:56.041972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.719 #29 NEW cov: 12432 ft: 15855 corp: 28/50b lim: 5 exec/s: 29 rss: 74Mb L: 2/4 MS: 1 ChangeBinInt- 00:06:51.719 [2024-11-20 07:02:56.111939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.719 [2024-11-20 07:02:56.111966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.719 [2024-11-20 07:02:56.112108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.719 [2024-11-20 07:02:56.112125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.719 #30 NEW cov: 12432 ft: 15881 corp: 29/52b lim: 5 exec/s: 30 rss: 74Mb L: 2/4 MS: 1 CopyPart- 00:06:51.719 [2024-11-20 07:02:56.163063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.719 [2024-11-20 07:02:56.163091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.719 [2024-11-20 07:02:56.163233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.719 [2024-11-20 07:02:56.163251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.719 [2024-11-20 07:02:56.163387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.719 [2024-11-20 07:02:56.163405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.719 [2024-11-20 07:02:56.163548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.719 [2024-11-20 07:02:56.163565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0
00:06:51.719 [2024-11-20 07:02:56.163704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:51.719 [2024-11-20 07:02:56.163722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:51.719 #31 NEW cov: 12432 ft: 16005 corp: 30/57b lim: 5 exec/s: 15 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes-
00:06:51.719 #31 DONE cov: 12432 ft: 16005 corp: 30/57b lim: 5 exec/s: 15 rss: 75Mb
00:06:51.719 Done 31 runs in 2 second(s)
00:06:51.978 07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
07:02:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
00:06:52.237 [2024-11-20 07:02:56.357780] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
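The status lines interleaved with the qpair traces above follow libFuzzer's fixed layout, e.g. "#31 NEW cov: 12432 ft: 16005 corp: 30/57b lim: 5 exec/s: 15 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes-": event number and kind, covered code edges (cov), observed features (ft), corpus entries and total bytes (corp), current input-length cap (lim), executions per second, resident memory, this input's length over the largest corpus input (L), and the mutation sequence that produced the input (MS). A small illustrative filter for tracking coverage growth from a saved copy of this log; the file name build.log is an assumption:

  # print one line per libFuzzer event (INITED/NEW/DONE are the kinds seen in this log)
  grep -Eo '#[0-9]+ (INITED|NEW|DONE) cov: [0-9]+ ft: [0-9]+' build.log |
  awk '{ sub(/^#/, "", $1); printf "exec=%-6s %-6s cov=%-6s ft=%s\n", $1, $2, $4, $6 }'

Within one instance cov is non-decreasing; it resets when the next instance starts, as with run 9 beginning here after run 8's "#31 DONE cov: 12432".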
00:06:51.978 [2024-11-20 07:02:56.357844] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778510 ] 00:06:52.237 [2024-11-20 07:02:56.613661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.237 [2024-11-20 07:02:56.673697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.237 [2024-11-20 07:02:56.732753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.237 [2024-11-20 07:02:56.749087] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:52.237 INFO: Running with entropic power schedule (0xFF, 100). 00:06:52.237 INFO: Seed: 903904252 00:06:52.237 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:52.237 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:52.237 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:52.237 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.495 [2024-11-20 07:02:56.794438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.495 [2024-11-20 07:02:56.794468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.495 #2 INITED cov: 12205 ft: 12196 corp: 1/1b exec/s: 0 rss: 71Mb 00:06:52.495 [2024-11-20 07:02:56.834610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.495 [2024-11-20 07:02:56.834640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.495 [2024-11-20 07:02:56.834712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.495 [2024-11-20 07:02:56.834726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.495 #3 NEW cov: 12318 ft: 13282 corp: 2/3b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:06:52.495 [2024-11-20 07:02:56.894602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.495 [2024-11-20 07:02:56.894628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.495 #4 NEW cov: 12324 ft: 13480 corp: 3/4b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ShuffleBytes- 00:06:52.495 [2024-11-20 07:02:56.934693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.495 [2024-11-20 07:02:56.934718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.495 #5 NEW cov: 12409 ft: 13857 corp: 4/5b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ShuffleBytes- 00:06:52.496 [2024-11-20 07:02:56.994854] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.496 [2024-11-20 07:02:56.994880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.496 #6 NEW cov: 12409 ft: 14039 corp: 5/6b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 CopyPart- 00:06:52.496 [2024-11-20 07:02:57.035006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.496 [2024-11-20 07:02:57.035032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.754 #7 NEW cov: 12409 ft: 14082 corp: 6/7b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ChangeBit- 00:06:52.754 [2024-11-20 07:02:57.095313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.754 [2024-11-20 07:02:57.095339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.754 [2024-11-20 07:02:57.095409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.754 [2024-11-20 07:02:57.095424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.754 #8 NEW cov: 12409 ft: 14133 corp: 7/9b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBit- 00:06:52.754 [2024-11-20 07:02:57.155624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.754 [2024-11-20 07:02:57.155649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.754 [2024-11-20 07:02:57.155705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.754 [2024-11-20 07:02:57.155720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.754 [2024-11-20 07:02:57.155790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.155808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.755 #9 NEW cov: 12409 ft: 14396 corp: 8/12b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CrossOver- 00:06:52.755 [2024-11-20 07:02:57.215635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.215661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.755 [2024-11-20 07:02:57.215732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.215746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.755 #10 NEW cov: 12409 ft: 14445 corp: 9/14b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeBit- 00:06:52.755 [2024-11-20 07:02:57.255625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.255651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.755 #11 NEW cov: 12409 ft: 14528 corp: 10/15b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ChangeByte- 00:06:52.755 [2024-11-20 07:02:57.296336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.296362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.755 [2024-11-20 07:02:57.296415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.296429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.755 [2024-11-20 07:02:57.296482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.296495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.755 [2024-11-20 07:02:57.296548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.296561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.755 [2024-11-20 07:02:57.296617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.755 [2024-11-20 07:02:57.296646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.016 #12 NEW cov: 12409 ft: 14875 corp: 11/20b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:06:53.016 [2024-11-20 07:02:57.355877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.016 [2024-11-20 07:02:57.355903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.016 #13 NEW cov: 12409 ft: 14879 corp: 12/21b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:53.016 [2024-11-20 07:02:57.416031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.016 [2024-11-20 
07:02:57.416057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.016 #14 NEW cov: 12409 ft: 14888 corp: 13/22b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:06:53.016 [2024-11-20 07:02:57.476214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.016 [2024-11-20 07:02:57.476239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.016 #15 NEW cov: 12409 ft: 14979 corp: 14/23b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:53.016 [2024-11-20 07:02:57.536688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.016 [2024-11-20 07:02:57.536714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.016 [2024-11-20 07:02:57.536771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.016 [2024-11-20 07:02:57.536786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.016 [2024-11-20 07:02:57.536839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.016 [2024-11-20 07:02:57.536853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.275 #16 NEW cov: 12409 ft: 14993 corp: 15/26b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:53.275 [2024-11-20 07:02:57.596513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.275 [2024-11-20 07:02:57.596539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.275 [2024-11-20 07:02:57.636799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.275 [2024-11-20 07:02:57.636824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.275 [2024-11-20 07:02:57.636881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.275 [2024-11-20 07:02:57.636894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.275 #18 NEW cov: 12409 ft: 15009 corp: 16/28b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 2 CopyPart-InsertByte- 00:06:53.275 [2024-11-20 07:02:57.676922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.275 [2024-11-20 07:02:57.676947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.275 [2024-11-20 07:02:57.677016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.275 [2024-11-20 07:02:57.677030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.534 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:53.534 #19 NEW cov: 12432 ft: 15095 corp: 17/30b lim: 5 exec/s: 19 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:53.534 [2024-11-20 07:02:57.977804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.534 [2024-11-20 07:02:57.977836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.534 [2024-11-20 07:02:57.977913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.534 [2024-11-20 07:02:57.977927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.534 #20 NEW cov: 12432 ft: 15108 corp: 18/32b lim: 5 exec/s: 20 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:06:53.534 [2024-11-20 07:02:58.037899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.534 [2024-11-20 07:02:58.037926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.534 [2024-11-20 07:02:58.037996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.534 [2024-11-20 07:02:58.038010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.534 #21 NEW cov: 12432 ft: 15116 corp: 19/34b lim: 5 exec/s: 21 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:06:53.534 [2024-11-20 07:02:58.078125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.534 [2024-11-20 07:02:58.078150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.534 [2024-11-20 07:02:58.078222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.534 [2024-11-20 07:02:58.078237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.534 [2024-11-20 07:02:58.078297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.534 [2024-11-20 07:02:58.078310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.794 #22 NEW cov: 
12432 ft: 15156 corp: 20/37b lim: 5 exec/s: 22 rss: 74Mb L: 3/5 MS: 1 ChangeByte- 00:06:53.794 [2024-11-20 07:02:58.118228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.118252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.794 [2024-11-20 07:02:58.118308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.118322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.794 [2024-11-20 07:02:58.118377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.118390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.794 #23 NEW cov: 12432 ft: 15191 corp: 21/40b lim: 5 exec/s: 23 rss: 74Mb L: 3/5 MS: 1 CrossOver- 00:06:53.794 [2024-11-20 07:02:58.178109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.178134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.794 #24 NEW cov: 12432 ft: 15227 corp: 22/41b lim: 5 exec/s: 24 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:06:53.794 [2024-11-20 07:02:58.218160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.218185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.794 [2024-11-20 07:02:58.258305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.258329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.794 #26 NEW cov: 12432 ft: 15250 corp: 23/42b lim: 5 exec/s: 26 rss: 74Mb L: 1/5 MS: 2 EraseBytes-CopyPart- 00:06:53.794 [2024-11-20 07:02:58.298744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.298770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.794 [2024-11-20 07:02:58.298828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.298843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.794 [2024-11-20 07:02:58.298898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.298911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.794 #27 NEW cov: 12432 ft: 15301 corp: 24/45b lim: 5 exec/s: 27 rss: 74Mb L: 3/5 MS: 1 CrossOver- 00:06:53.794 [2024-11-20 07:02:58.338710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.338735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.794 [2024-11-20 07:02:58.338791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.794 [2024-11-20 07:02:58.338805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.053 #28 NEW cov: 12432 ft: 15335 corp: 25/47b lim: 5 exec/s: 28 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:54.053 [2024-11-20 07:02:58.378624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.378649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.053 #29 NEW cov: 12432 ft: 15347 corp: 26/48b lim: 5 exec/s: 29 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:54.053 [2024-11-20 07:02:58.418767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.418791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.053 #30 NEW cov: 12432 ft: 15355 corp: 27/49b lim: 5 exec/s: 30 rss: 74Mb L: 1/5 MS: 1 CrossOver- 00:06:54.053 [2024-11-20 07:02:58.478933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.478958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.053 #31 NEW cov: 12432 ft: 15369 corp: 28/50b lim: 5 exec/s: 31 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:06:54.053 [2024-11-20 07:02:58.539785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.539810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.053 [2024-11-20 07:02:58.539865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.539879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.053 [2024-11-20 07:02:58.539932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.539961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.053 [2024-11-20 07:02:58.540017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.540031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.053 [2024-11-20 07:02:58.540087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.540101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.053 #32 NEW cov: 12432 ft: 15390 corp: 29/55b lim: 5 exec/s: 32 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:54.053 [2024-11-20 07:02:58.599306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.053 [2024-11-20 07:02:58.599332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.312 #33 NEW cov: 12432 ft: 15401 corp: 30/56b lim: 5 exec/s: 33 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:06:54.312 [2024-11-20 07:02:58.659971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.312 [2024-11-20 07:02:58.659998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.312 [2024-11-20 07:02:58.660057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.312 [2024-11-20 07:02:58.660072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.312 [2024-11-20 07:02:58.660128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.312 [2024-11-20 07:02:58.660142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.312 [2024-11-20 07:02:58.660197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.312 [2024-11-20 07:02:58.660211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.312 #34 NEW cov: 12432 ft: 15414 corp: 31/60b lim: 5 exec/s: 34 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:54.312 [2024-11-20 07:02:58.719615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.312 [2024-11-20 07:02:58.719641] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.312 #35 NEW cov: 12432 ft: 15421 corp: 32/61b lim: 5 exec/s: 35 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:54.312 [2024-11-20 07:02:58.759715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.312 [2024-11-20 07:02:58.759741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.312 #36 NEW cov: 12432 ft: 15423 corp: 33/62b lim: 5 exec/s: 18 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:06:54.312 #36 DONE cov: 12432 ft: 15423 corp: 33/62b lim: 5 exec/s: 18 rss: 75Mb 00:06:54.312 Done 36 runs in 2 second(s) 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:54.571 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.572 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.572 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.572 07:02:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:54.572 [2024-11-20 07:02:58.954007] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:54.572 [2024-11-20 07:02:58.954076] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778968 ] 00:06:54.830 [2024-11-20 07:02:59.216432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.830 [2024-11-20 07:02:59.276519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.830 [2024-11-20 07:02:59.335663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.830 [2024-11-20 07:02:59.352003] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:54.830 INFO: Running with entropic power schedule (0xFF, 100). 00:06:54.830 INFO: Seed: 3505915907 00:06:55.089 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:55.089 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:55.089 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:55.089 INFO: A corpus is not provided, starting from an empty corpus 00:06:55.089 #2 INITED exec/s: 0 rss: 65Mb 00:06:55.089 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:55.089 This may also happen if the target rejected all inputs we tried so far 00:06:55.089 [2024-11-20 07:02:59.411594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.089 [2024-11-20 07:02:59.411642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.089 [2024-11-20 07:02:59.411704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.089 [2024-11-20 07:02:59.411719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.089 [2024-11-20 07:02:59.411775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.089 [2024-11-20 07:02:59.411788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.089 [2024-11-20 07:02:59.411846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.089 [2024-11-20 07:02:59.411859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.347 NEW_FUNC[1/715]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:55.347 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:55.347 #4 NEW cov: 12228 ft: 12219 corp: 2/34b lim: 40 exec/s: 0 rss: 73Mb L: 33/33 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:55.347 [2024-11-20 07:02:59.752564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.347 [2024-11-20 07:02:59.752610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.752691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.752709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.752777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.752795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.752862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.752880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.348 #5 NEW cov: 12341 ft: 12767 corp: 3/68b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CrossOver- 00:06:55.348 [2024-11-20 07:02:59.812629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.812657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.812716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.812733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.812791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.812804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.812862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.812875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.348 #6 NEW cov: 12347 ft: 12928 corp: 4/102b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ShuffleBytes- 00:06:55.348 [2024-11-20 07:02:59.872740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffff29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.872769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.872846] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.872859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.872918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.872931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.348 [2024-11-20 07:02:59.872990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.348 [2024-11-20 07:02:59.873004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.348 #7 NEW cov: 12432 ft: 13344 corp: 5/136b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertByte- 00:06:55.607 [2024-11-20 07:02:59.912835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:02:59.912861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:02:59.912935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:02:59.912949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:02:59.913008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffa8ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:02:59.913022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:02:59.913079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:02:59.913092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.607 #8 NEW cov: 12432 ft: 13477 corp: 6/170b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:06:55.607 [2024-11-20 07:02:59.952904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:02:59.952930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:02:59.952992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:02:59.953006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:55.607 [2024-11-20 07:02:59.953066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:02:59.953080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:02:59.953137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:02:59.953149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.607 #9 NEW cov: 12432 ft: 13513 corp: 7/204b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ShuffleBytes- 00:06:55.607 [2024-11-20 07:03:00.012831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.012858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:03:00.012919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.012934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.607 #10 NEW cov: 12432 ft: 14153 corp: 8/227b lim: 40 exec/s: 0 rss: 73Mb L: 23/34 MS: 1 EraseBytes- 00:06:55.607 [2024-11-20 07:03:00.052945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.052971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:03:00.053046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.053060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.607 #11 NEW cov: 12432 ft: 14190 corp: 9/245b lim: 40 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 InsertRepeatedBytes- 00:06:55.607 [2024-11-20 07:03:00.093347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.093374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:03:00.093436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.093450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:03:00.093508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffa8ffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.093522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:03:00.093584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.093602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.607 #12 NEW cov: 12432 ft: 14232 corp: 10/279b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeBinInt- 00:06:55.607 [2024-11-20 07:03:00.153207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.153235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.607 [2024-11-20 07:03:00.153294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.607 [2024-11-20 07:03:00.153308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.869 #13 NEW cov: 12432 ft: 14299 corp: 11/302b lim: 40 exec/s: 0 rss: 74Mb L: 23/34 MS: 1 CopyPart- 00:06:55.869 [2024-11-20 07:03:00.213255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1b1bffff cdw11:ffff2fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.869 [2024-11-20 07:03:00.213282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.869 #18 NEW cov: 12432 ft: 14636 corp: 12/312b lim: 40 exec/s: 0 rss: 74Mb L: 10/34 MS: 5 ChangeByte-ShuffleBytes-CrossOver-InsertByte-CopyPart- 00:06:55.869 [2024-11-20 07:03:00.253708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a210000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.869 [2024-11-20 07:03:00.253734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.869 [2024-11-20 07:03:00.253809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.869 [2024-11-20 07:03:00.253824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.869 [2024-11-20 07:03:00.253881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.869 [2024-11-20 07:03:00.253895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.869 [2024-11-20 07:03:00.253955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.869 [2024-11-20 07:03:00.253970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:06:55.869 #19 NEW cov: 12432 ft: 14648 corp: 13/345b lim: 40 exec/s: 0 rss: 74Mb L: 33/34 MS: 1 ChangeBinInt- 00:06:55.869 [2024-11-20 07:03:00.293845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffff29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.869 [2024-11-20 07:03:00.293871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.869 [2024-11-20 07:03:00.293948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.869 [2024-11-20 07:03:00.293961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.869 [2024-11-20 07:03:00.294019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff3bff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.869 [2024-11-20 07:03:00.294037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.870 [2024-11-20 07:03:00.294097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.870 [2024-11-20 07:03:00.294111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.870 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:55.870 #20 NEW cov: 12455 ft: 14691 corp: 14/379b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ChangeByte- 00:06:55.870 [2024-11-20 07:03:00.353781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1700 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.870 [2024-11-20 07:03:00.353808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.870 [2024-11-20 07:03:00.353884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.870 [2024-11-20 07:03:00.353898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.870 #21 NEW cov: 12455 ft: 14756 corp: 15/402b lim: 40 exec/s: 0 rss: 74Mb L: 23/34 MS: 1 ChangeBinInt- 00:06:55.870 [2024-11-20 07:03:00.394135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.870 [2024-11-20 07:03:00.394161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.870 [2024-11-20 07:03:00.394241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.870 [2024-11-20 07:03:00.394255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.870 [2024-11-20 07:03:00.394314] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffa8ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.870 [2024-11-20 07:03:00.394328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.870 [2024-11-20 07:03:00.394389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fffff7ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.870 [2024-11-20 07:03:00.394403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.870 #22 NEW cov: 12455 ft: 14811 corp: 16/436b lim: 40 exec/s: 22 rss: 74Mb L: 34/34 MS: 1 ChangeBit- 00:06:56.215 [2024-11-20 07:03:00.433868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1b1bffff cdw11:ffff2fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.433895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.215 #23 NEW cov: 12455 ft: 14858 corp: 17/446b lim: 40 exec/s: 23 rss: 74Mb L: 10/34 MS: 1 ChangeBinInt- 00:06:56.215 [2024-11-20 07:03:00.494425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffff29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.494450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.494525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.494554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.494616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff3bff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.494630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.494688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.494701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.215 #24 NEW cov: 12455 ft: 14901 corp: 18/481b lim: 40 exec/s: 24 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:06:56.215 [2024-11-20 07:03:00.554383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1700 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.554409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.554470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0800 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.554484] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.215 #25 NEW cov: 12455 ft: 14917 corp: 19/504b lim: 40 exec/s: 25 rss: 74Mb L: 23/35 MS: 1 ChangeBinInt- 00:06:56.215 [2024-11-20 07:03:00.614355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.614381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.215 #28 NEW cov: 12455 ft: 14932 corp: 20/515b lim: 40 exec/s: 28 rss: 74Mb L: 11/35 MS: 3 CrossOver-CrossOver-CopyPart- 00:06:56.215 [2024-11-20 07:03:00.654882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:fffeff29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.654908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.654969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.654983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.655042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff3bff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.655056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.655116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.655129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.215 #29 NEW cov: 12455 ft: 14971 corp: 21/549b lim: 40 exec/s: 29 rss: 74Mb L: 34/35 MS: 1 ChangeBit- 00:06:56.215 [2024-11-20 07:03:00.695016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.695043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.695106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5cffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.695120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.695182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffa8ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.695196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.215 [2024-11-20 07:03:00.695257] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.215 [2024-11-20 07:03:00.695274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.215 #30 NEW cov: 12455 ft: 15007 corp: 22/583b lim: 40 exec/s: 30 rss: 74Mb L: 34/35 MS: 1 ChangeByte- 00:06:56.495 [2024-11-20 07:03:00.754901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:fffffeff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.754927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.495 [2024-11-20 07:03:00.754989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.755004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.495 #31 NEW cov: 12455 ft: 15018 corp: 23/606b lim: 40 exec/s: 31 rss: 74Mb L: 23/35 MS: 1 CMP- DE: "\377\377\377\377\376\377\377\377"- 00:06:56.495 [2024-11-20 07:03:00.794985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.795011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.495 [2024-11-20 07:03:00.795073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.795087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.495 #32 NEW cov: 12455 ft: 15066 corp: 24/624b lim: 40 exec/s: 32 rss: 74Mb L: 18/35 MS: 1 EraseBytes- 00:06:56.495 [2024-11-20 07:03:00.834979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1b1bffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.835006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.495 #33 NEW cov: 12455 ft: 15097 corp: 25/633b lim: 40 exec/s: 33 rss: 74Mb L: 9/35 MS: 1 EraseBytes- 00:06:56.495 [2024-11-20 07:03:00.875095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1b1bffff cdw11:ffffff2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.875121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.495 #34 NEW cov: 12455 ft: 15172 corp: 26/643b lim: 40 exec/s: 34 rss: 74Mb L: 10/35 MS: 1 ShuffleBytes- 00:06:56.495 [2024-11-20 07:03:00.935526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffff29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.935552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:56.495 [2024-11-20 07:03:00.935620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.935651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.495 [2024-11-20 07:03:00.935708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:3bff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.935722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.495 #35 NEW cov: 12455 ft: 15365 corp: 27/668b lim: 40 exec/s: 35 rss: 74Mb L: 25/35 MS: 1 CrossOver- 00:06:56.495 [2024-11-20 07:03:00.975619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.975645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.495 [2024-11-20 07:03:00.975704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff2affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.975717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.495 [2024-11-20 07:03:00.975777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.495 [2024-11-20 07:03:00.975790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.496 #36 NEW cov: 12455 ft: 15379 corp: 28/692b lim: 40 exec/s: 36 rss: 74Mb L: 24/35 MS: 1 InsertByte- 00:06:56.496 [2024-11-20 07:03:01.035844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:fffeff29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.496 [2024-11-20 07:03:01.035871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.496 [2024-11-20 07:03:01.035934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.496 [2024-11-20 07:03:01.035948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.496 [2024-11-20 07:03:01.036007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.496 [2024-11-20 07:03:01.036020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.755 #37 NEW cov: 12455 ft: 15386 corp: 29/723b lim: 40 exec/s: 37 rss: 75Mb L: 31/35 MS: 1 EraseBytes- 00:06:56.755 [2024-11-20 07:03:01.096157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:56.755 [2024-11-20 07:03:01.096184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.755 [2024-11-20 07:03:01.096246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.096260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.755 [2024-11-20 07:03:01.096319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.096333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.755 [2024-11-20 07:03:01.096397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffff09 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.096411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.755 #38 NEW cov: 12455 ft: 15397 corp: 30/756b lim: 40 exec/s: 38 rss: 75Mb L: 33/35 MS: 1 ChangeBinInt- 00:06:56.755 [2024-11-20 07:03:01.135826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.135852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.755 #39 NEW cov: 12455 ft: 15496 corp: 31/765b lim: 40 exec/s: 39 rss: 75Mb L: 9/35 MS: 1 CopyPart- 00:06:56.755 [2024-11-20 07:03:01.196327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ff626262 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.196353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.755 [2024-11-20 07:03:01.196412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:62626262 cdw11:62626262 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.196426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.755 [2024-11-20 07:03:01.196486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:62ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.196499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.755 [2024-11-20 07:03:01.196559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.196573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.755 #40 NEW cov: 12455 ft: 15511 corp: 32/800b lim: 40 exec/s: 40 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:56.755 [2024-11-20 
07:03:01.236385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:fffffeff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.236410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.755 [2024-11-20 07:03:01.236470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.236484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.755 [2024-11-20 07:03:01.236542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.236556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.755 #41 NEW cov: 12455 ft: 15551 corp: 33/828b lim: 40 exec/s: 41 rss: 75Mb L: 28/35 MS: 1 CrossOver- 00:06:56.755 [2024-11-20 07:03:01.296282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.755 [2024-11-20 07:03:01.296307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.015 #42 NEW cov: 12455 ft: 15565 corp: 34/838b lim: 40 exec/s: 42 rss: 75Mb L: 10/35 MS: 1 EraseBytes- 00:06:57.015 [2024-11-20 07:03:01.356861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1b1bffcc cdw11:cccccccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.015 [2024-11-20 07:03:01.356891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.015 [2024-11-20 07:03:01.356967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cccccccc cdw11:cccccccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.015 [2024-11-20 07:03:01.356982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.015 [2024-11-20 07:03:01.357042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cccccccc cdw11:cccccccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.015 [2024-11-20 07:03:01.357055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.015 [2024-11-20 07:03:01.357113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cccccccc cdw11:ccccffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.015 [2024-11-20 07:03:01.357126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.015 #43 NEW cov: 12455 ft: 15577 corp: 35/875b lim: 40 exec/s: 21 rss: 75Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:57.015 #43 DONE cov: 12455 ft: 15577 corp: 35/875b lim: 40 exec/s: 21 rss: 75Mb 00:06:57.015 ###### Recommended dictionary. 
###### 00:06:57.015 "\377\377\377\377\376\377\377\377" # Uses: 0 00:06:57.015 ###### End of recommended dictionary. ###### 00:06:57.015 Done 43 runs in 2 second(s) 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:57.015 07:03:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:57.015 [2024-11-20 07:03:01.548094] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:57.015 [2024-11-20 07:03:01.548180] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779456 ] 00:06:57.274 [2024-11-20 07:03:01.733926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.274 [2024-11-20 07:03:01.767808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.274 [2024-11-20 07:03:01.827293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.532 [2024-11-20 07:03:01.843640] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:57.532 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.532 INFO: Seed: 1702954073 00:06:57.532 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:57.532 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:57.532 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:57.532 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.532 #2 INITED exec/s: 0 rss: 66Mb 00:06:57.532 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.532 This may also happen if the target rejected all inputs we tried so far 00:06:57.532 [2024-11-20 07:03:01.889133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.532 [2024-11-20 07:03:01.889162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.532 [2024-11-20 07:03:01.889236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.532 [2024-11-20 07:03:01.889250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.791 NEW_FUNC[1/716]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:57.791 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.791 #11 NEW cov: 12240 ft: 12239 corp: 2/22b lim: 40 exec/s: 0 rss: 73Mb L: 21/21 MS: 4 CMP-CopyPart-CopyPart-InsertRepeatedBytes- DE: " \000\000\000"- 00:06:57.791 [2024-11-20 07:03:02.229887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49797979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.791 [2024-11-20 07:03:02.229918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.791 #15 NEW cov: 12353 ft: 13548 corp: 3/31b lim: 40 exec/s: 0 rss: 73Mb L: 9/21 MS: 4 InsertByte-ChangeBit-CrossOver-InsertRepeatedBytes- 00:06:57.791 [2024-11-20 07:03:02.269919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49797979 cdw11:7979793b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.791 [2024-11-20 07:03:02.269945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.791 #16 NEW cov: 12359 ft: 13776 corp: 4/39b lim: 40 exec/s: 0 rss: 73Mb L: 8/21 MS: 1 EraseBytes- 00:06:57.791 [2024-11-20 07:03:02.330217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:260a4949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.791 [2024-11-20 07:03:02.330242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.791 [2024-11-20 07:03:02.330300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.791 [2024-11-20 07:03:02.330313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.049 #23 NEW cov: 12444 ft: 14071 corp: 5/62b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 2 InsertByte-CrossOver- 00:06:58.050 [2024-11-20 07:03:02.370208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a260a49 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.050 [2024-11-20 07:03:02.370233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.050 #24 NEW cov: 12444 ft: 14138 corp: 6/76b lim: 40 exec/s: 0 rss: 73Mb L: 14/23 MS: 1 CrossOver- 00:06:58.050 [2024-11-20 07:03:02.410280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49797979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.050 [2024-11-20 07:03:02.410304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.050 #25 NEW cov: 12444 ft: 14244 corp: 7/86b lim: 40 exec/s: 0 rss: 73Mb L: 10/23 MS: 1 CrossOver- 00:06:58.050 [2024-11-20 07:03:02.470591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49797979 cdw11:7979790a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.050 [2024-11-20 07:03:02.470622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.050 [2024-11-20 07:03:02.470699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:260a4949 cdw11:4949493b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.050 [2024-11-20 07:03:02.470714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.050 #26 NEW cov: 12444 ft: 14295 corp: 8/108b lim: 40 exec/s: 0 rss: 73Mb L: 22/23 MS: 1 CrossOver- 00:06:58.050 [2024-11-20 07:03:02.510727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.050 [2024-11-20 07:03:02.510752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.050 [2024-11-20 07:03:02.510811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:49494979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.050 [2024-11-20 07:03:02.510824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:06:58.050 #27 NEW cov: 12444 ft: 14373 corp: 9/130b lim: 40 exec/s: 0 rss: 73Mb L: 22/23 MS: 1 CrossOver- 00:06:58.050 [2024-11-20 07:03:02.550905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a094949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.050 [2024-11-20 07:03:02.550931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.050 [2024-11-20 07:03:02.550988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:49494979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.050 [2024-11-20 07:03:02.551002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.050 #28 NEW cov: 12444 ft: 14406 corp: 10/152b lim: 40 exec/s: 0 rss: 74Mb L: 22/23 MS: 1 ChangeBit- 00:06:58.308 [2024-11-20 07:03:02.611195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:1ae3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.308 [2024-11-20 07:03:02.611220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.308 [2024-11-20 07:03:02.611295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.308 [2024-11-20 07:03:02.611308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.308 [2024-11-20 07:03:02.611365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.611381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.309 #31 NEW cov: 12444 ft: 14656 corp: 11/183b lim: 40 exec/s: 0 rss: 74Mb L: 31/31 MS: 3 ChangeBit-PersAutoDict-InsertRepeatedBytes- DE: " \000\000\000"- 00:06:58.309 [2024-11-20 07:03:02.651317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:260a4949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.651341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.309 [2024-11-20 07:03:02.651400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a494920 cdw11:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.651413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.309 [2024-11-20 07:03:02.651470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.651484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.309 #37 NEW cov: 12444 ft: 14673 corp: 12/210b lim: 40 exec/s: 0 rss: 74Mb L: 27/31 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:06:58.309 [2024-11-20 07:03:02.711141] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a260849 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.711166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.309 #38 NEW cov: 12444 ft: 14710 corp: 13/224b lim: 40 exec/s: 0 rss: 74Mb L: 14/31 MS: 1 ChangeBit- 00:06:58.309 [2024-11-20 07:03:02.771485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.771510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.309 [2024-11-20 07:03:02.771584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:49494979 cdw11:79797949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.771604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.309 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:58.309 #39 NEW cov: 12467 ft: 14755 corp: 14/242b lim: 40 exec/s: 0 rss: 74Mb L: 18/31 MS: 1 EraseBytes- 00:06:58.309 [2024-11-20 07:03:02.811959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.811984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.309 [2024-11-20 07:03:02.812055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:49494979 cdw11:79797949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.812069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.309 [2024-11-20 07:03:02.812128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:260a4949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.812141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.309 [2024-11-20 07:03:02.812197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:79797979 cdw11:493b493b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.309 [2024-11-20 07:03:02.812213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.309 #40 NEW cov: 12467 ft: 15088 corp: 15/275b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 CopyPart- 00:06:58.568 [2024-11-20 07:03:02.871849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a494949 cdw11:49484949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:02.871874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.568 [2024-11-20 07:03:02.871948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:02.871961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.568 #41 NEW cov: 12467 ft: 15107 corp: 16/296b lim: 40 exec/s: 41 rss: 74Mb L: 21/33 MS: 1 ChangeBit- 00:06:58.568 [2024-11-20 07:03:02.932153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:1ae3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:02.932179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.568 [2024-11-20 07:03:02.932239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:02.932253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.568 [2024-11-20 07:03:02.932311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:02.932324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.568 #42 NEW cov: 12467 ft: 15172 corp: 17/327b lim: 40 exec/s: 42 rss: 74Mb L: 31/33 MS: 1 ChangeByte- 00:06:58.568 [2024-11-20 07:03:02.992138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a094949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:02.992163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.568 [2024-11-20 07:03:02.992238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00494979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:02.992252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.568 #43 NEW cov: 12467 ft: 15188 corp: 18/349b lim: 40 exec/s: 43 rss: 74Mb L: 22/33 MS: 1 ChangeByte- 00:06:58.568 [2024-11-20 07:03:03.052324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:03.052349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.568 [2024-11-20 07:03:03.052422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:49494979 cdw11:7979493b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:03.052435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.568 #44 NEW cov: 12467 ft: 15204 corp: 19/368b lim: 40 exec/s: 44 rss: 74Mb L: 19/33 MS: 1 EraseBytes- 00:06:58.568 [2024-11-20 07:03:03.112306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49797978 cdw11:7979793b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.568 [2024-11-20 07:03:03.112331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.827 #45 NEW cov: 12467 ft: 15215 corp: 20/376b lim: 40 exec/s: 45 rss: 75Mb L: 8/33 MS: 1 ChangeBit- 00:06:58.827 [2024-11-20 07:03:03.152930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:001ae3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.152955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.827 [2024-11-20 07:03:03.153027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.153040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.827 [2024-11-20 07:03:03.153097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.153111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.827 [2024-11-20 07:03:03.153166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e332 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.153179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.827 #46 NEW cov: 12467 ft: 15236 corp: 21/408b lim: 40 exec/s: 46 rss: 75Mb L: 32/33 MS: 1 InsertByte- 00:06:58.827 [2024-11-20 07:03:03.212932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:1ae3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.212958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.827 [2024-11-20 07:03:03.213018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.213032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.827 [2024-11-20 07:03:03.213091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.213105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.827 #47 NEW cov: 12467 ft: 15247 corp: 22/439b lim: 40 exec/s: 47 rss: 75Mb L: 31/33 MS: 1 ShuffleBytes- 00:06:58.827 [2024-11-20 07:03:03.252714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49797979 cdw11:79797939 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.252741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.827 #48 NEW cov: 12467 ft: 15315 corp: 23/449b lim: 40 exec/s: 48 rss: 75Mb L: 10/33 MS: 1 ChangeBit- 00:06:58.827 [2024-11-20 07:03:03.313224] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:1ae3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.313249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.827 [2024-11-20 07:03:03.313310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.313324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.827 [2024-11-20 07:03:03.313394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e33be3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.313412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.827 #49 NEW cov: 12467 ft: 15339 corp: 24/480b lim: 40 exec/s: 49 rss: 75Mb L: 31/33 MS: 1 ChangeByte- 00:06:58.827 [2024-11-20 07:03:03.352990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b8b8b8b8 cdw11:b8b8b80a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.827 [2024-11-20 07:03:03.353016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.827 #50 NEW cov: 12467 ft: 15375 corp: 25/488b lim: 40 exec/s: 50 rss: 75Mb L: 8/33 MS: 1 InsertRepeatedBytes- 00:06:59.086 [2024-11-20 07:03:03.393119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:24497979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.393144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.086 #51 NEW cov: 12467 ft: 15394 corp: 26/499b lim: 40 exec/s: 51 rss: 75Mb L: 11/33 MS: 1 InsertByte- 00:06:59.086 [2024-11-20 07:03:03.453432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a094949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.453458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.086 [2024-11-20 07:03:03.453518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00494979 cdw11:0a497979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.453532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.086 #52 NEW cov: 12467 ft: 15460 corp: 27/521b lim: 40 exec/s: 52 rss: 75Mb L: 22/33 MS: 1 ShuffleBytes- 00:06:59.086 [2024-11-20 07:03:03.513477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49797979 cdw11:797979f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.513503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.086 #53 NEW cov: 12467 ft: 15471 corp: 28/530b lim: 40 exec/s: 53 rss: 75Mb L: 9/33 MS: 1 ChangeBit- 00:06:59.086 [2024-11-20 07:03:03.553806] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.553831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.086 [2024-11-20 07:03:03.553909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00494979 cdw11:79797949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.553923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.086 #54 NEW cov: 12467 ft: 15496 corp: 29/548b lim: 40 exec/s: 54 rss: 75Mb L: 18/33 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:59.086 [2024-11-20 07:03:03.594195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:1ae3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.594221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.086 [2024-11-20 07:03:03.594282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffe3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.594296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.086 [2024-11-20 07:03:03.594352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.594368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.086 [2024-11-20 07:03:03.594427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e332e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.086 [2024-11-20 07:03:03.594441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.086 #55 NEW cov: 12467 ft: 15549 corp: 30/582b lim: 40 exec/s: 55 rss: 75Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:59.346 [2024-11-20 07:03:03.654050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a200000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.654075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.346 [2024-11-20 07:03:03.654147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00494979 cdw11:79797949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.654161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.346 #56 NEW cov: 12467 ft: 15626 corp: 31/600b lim: 40 exec/s: 56 rss: 75Mb L: 18/34 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:06:59.346 [2024-11-20 07:03:03.714538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49790a26 cdw11:0a494901 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.714565] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.346 [2024-11-20 07:03:03.714625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000049 cdw11:49494979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.714641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.346 [2024-11-20 07:03:03.714697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:79797949 cdw11:260a4949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.714711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.346 [2024-11-20 07:03:03.714767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:49494949 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.714780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.346 #57 NEW cov: 12467 ft: 15635 corp: 32/637b lim: 40 exec/s: 57 rss: 75Mb L: 37/37 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:59.346 [2024-11-20 07:03:03.754166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:24497979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.754191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.346 #63 NEW cov: 12467 ft: 15652 corp: 33/648b lim: 40 exec/s: 63 rss: 75Mb L: 11/37 MS: 1 CopyPart- 00:06:59.346 [2024-11-20 07:03:03.814518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:260a4949 cdw11:4949491e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.814542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.346 [2024-11-20 07:03:03.814603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.814617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.346 #64 NEW cov: 12467 ft: 15657 corp: 34/671b lim: 40 exec/s: 64 rss: 75Mb L: 23/37 MS: 1 ChangeByte- 00:06:59.346 [2024-11-20 07:03:03.854424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:24497979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.346 [2024-11-20 07:03:03.854448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.346 #65 NEW cov: 12467 ft: 15661 corp: 35/682b lim: 40 exec/s: 32 rss: 75Mb L: 11/37 MS: 1 ChangeASCIIInt- 00:06:59.346 #65 DONE cov: 12467 ft: 15661 corp: 35/682b lim: 40 exec/s: 32 rss: 75Mb 00:06:59.346 ###### Recommended dictionary. ###### 00:06:59.346 " \000\000\000" # Uses: 3 00:06:59.346 "\001\000\000\000" # Uses: 1 00:06:59.346 ###### End of recommended dictionary. 
###### 00:06:59.346 Done 65 runs in 2 second(s) 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:59.606 07:03:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.606 07:03:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.606 07:03:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.606 07:03:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:59.606 [2024-11-20 07:03:04.029186] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:59.606 [2024-11-20 07:03:04.029258] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780015 ] 00:06:59.865 [2024-11-20 07:03:04.219176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.865 [2024-11-20 07:03:04.256603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.865 [2024-11-20 07:03:04.316042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.865 [2024-11-20 07:03:04.332384] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:59.865 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.865 INFO: Seed: 4192949657 00:06:59.865 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:06:59.865 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:06:59.865 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:59.865 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.865 #2 INITED exec/s: 0 rss: 65Mb 00:06:59.865 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:59.865 This may also happen if the target rejected all inputs we tried so far 00:06:59.865 [2024-11-20 07:03:04.397740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.865 [2024-11-20 07:03:04.397768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.383 NEW_FUNC[1/716]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:00.383 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:00.383 #11 NEW cov: 12219 ft: 12183 corp: 2/16b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 4 CopyPart-CopyPart-CopyPart-InsertRepeatedBytes- 00:07:00.383 [2024-11-20 07:03:04.728786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:05000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.383 [2024-11-20 07:03:04.728838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.383 #12 NEW cov: 12350 ft: 12797 corp: 3/31b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 CMP- DE: "\001\005"- 00:07:00.383 [2024-11-20 07:03:04.788690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.383 [2024-11-20 07:03:04.788717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.383 #13 NEW cov: 12356 ft: 13031 corp: 4/40b lim: 40 exec/s: 0 rss: 73Mb L: 9/15 MS: 1 EraseBytes- 00:07:00.383 [2024-11-20 07:03:04.848826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:05000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.383 [2024-11-20 07:03:04.848852] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.383 #14 NEW cov: 12441 ft: 13295 corp: 5/50b lim: 40 exec/s: 0 rss: 73Mb L: 10/15 MS: 1 CrossOver- 00:07:00.383 [2024-11-20 07:03:04.888896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.383 [2024-11-20 07:03:04.888920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.383 #15 NEW cov: 12441 ft: 13404 corp: 6/65b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 CMP- DE: "\016\000\000\000"- 00:07:00.384 [2024-11-20 07:03:04.929037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.384 [2024-11-20 07:03:04.929062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.643 #16 NEW cov: 12441 ft: 13446 corp: 7/80b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 ChangeByte- 00:07:00.643 [2024-11-20 07:03:04.989763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.643 [2024-11-20 07:03:04.989788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.643 [2024-11-20 07:03:04.989841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:21212121 cdw11:21212121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.643 [2024-11-20 07:03:04.989855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.643 [2024-11-20 07:03:04.989911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:21212121 cdw11:21212121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.643 [2024-11-20 07:03:04.989940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.643 [2024-11-20 07:03:04.989996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:21212121 cdw11:21210000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.643 [2024-11-20 07:03:04.990009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.643 #17 NEW cov: 12441 ft: 14419 corp: 8/117b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:00.643 [2024-11-20 07:03:05.049379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:f2000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.643 [2024-11-20 07:03:05.049405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.643 #18 NEW cov: 12441 ft: 14449 corp: 9/132b lim: 40 exec/s: 0 rss: 73Mb L: 15/37 MS: 1 ChangeBinInt- 00:07:00.643 [2024-11-20 07:03:05.089464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.643 [2024-11-20 07:03:05.089489] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.643 #19 NEW cov: 12441 ft: 14464 corp: 10/146b lim: 40 exec/s: 0 rss: 73Mb L: 14/37 MS: 1 EraseBytes- 00:07:00.643 [2024-11-20 07:03:05.129610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.643 [2024-11-20 07:03:05.129635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.643 #20 NEW cov: 12441 ft: 14508 corp: 11/161b lim: 40 exec/s: 0 rss: 73Mb L: 15/37 MS: 1 ShuffleBytes- 00:07:00.643 [2024-11-20 07:03:05.169729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000020 cdw11:f2000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.643 [2024-11-20 07:03:05.169754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.902 #21 NEW cov: 12441 ft: 14524 corp: 12/176b lim: 40 exec/s: 0 rss: 73Mb L: 15/37 MS: 1 ChangeBit- 00:07:00.902 [2024-11-20 07:03:05.230522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ba474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.902 [2024-11-20 07:03:05.230546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.902 [2024-11-20 07:03:05.230617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.902 [2024-11-20 07:03:05.230632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.902 [2024-11-20 07:03:05.230683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.902 [2024-11-20 07:03:05.230697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.230749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.230762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.230817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.230831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.903 #24 NEW cov: 12441 ft: 14607 corp: 13/216b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:00.903 [2024-11-20 07:03:05.270145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:60000000 cdw11:01050000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.270172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.270227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.270242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.903 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:00.903 #25 NEW cov: 12464 ft: 14878 corp: 14/232b lim: 40 exec/s: 0 rss: 74Mb L: 16/40 MS: 1 InsertByte- 00:07:00.903 [2024-11-20 07:03:05.310420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.310446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.310517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.310530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.310584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.310607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.903 #26 NEW cov: 12464 ft: 15075 corp: 15/258b lim: 40 exec/s: 0 rss: 74Mb L: 26/40 MS: 1 InsertRepeatedBytes- 00:07:00.903 [2024-11-20 07:03:05.350505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.350530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.350584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93939393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.350602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.350672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93939393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.350685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.903 #27 NEW cov: 12464 ft: 15155 corp: 16/289b lim: 40 exec/s: 0 rss: 74Mb L: 31/40 MS: 1 InsertRepeatedBytes- 00:07:00.903 [2024-11-20 07:03:05.390664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.390689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.390760] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.390776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.903 [2024-11-20 07:03:05.390835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.390847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.903 #28 NEW cov: 12464 ft: 15184 corp: 17/315b lim: 40 exec/s: 28 rss: 74Mb L: 26/40 MS: 1 CopyPart- 00:07:00.903 [2024-11-20 07:03:05.430450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.903 [2024-11-20 07:03:05.430474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.162 #29 NEW cov: 12464 ft: 15226 corp: 18/328b lim: 40 exec/s: 29 rss: 74Mb L: 13/40 MS: 1 EraseBytes- 00:07:01.162 [2024-11-20 07:03:05.490663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:05000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.490687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.162 #30 NEW cov: 12464 ft: 15261 corp: 19/338b lim: 40 exec/s: 30 rss: 74Mb L: 10/40 MS: 1 EraseBytes- 00:07:01.162 [2024-11-20 07:03:05.531026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.531051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.162 [2024-11-20 07:03:05.531122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93939393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.531135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.162 [2024-11-20 07:03:05.531190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:99939393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.531203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.162 #31 NEW cov: 12464 ft: 15334 corp: 20/369b lim: 40 exec/s: 31 rss: 74Mb L: 31/40 MS: 1 ChangeByte- 00:07:01.162 [2024-11-20 07:03:05.590896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:5d050000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.590921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.162 #32 NEW cov: 12464 ft: 15346 corp: 21/380b lim: 40 exec/s: 32 rss: 74Mb L: 11/40 MS: 1 InsertByte- 00:07:01.162 [2024-11-20 07:03:05.651052] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00ff000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.651076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.162 #33 NEW cov: 12464 ft: 15391 corp: 22/394b lim: 40 exec/s: 33 rss: 74Mb L: 14/40 MS: 1 InsertByte- 00:07:01.162 [2024-11-20 07:03:05.711529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.711554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.162 [2024-11-20 07:03:05.711630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93939393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.711648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.162 [2024-11-20 07:03:05.711713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:01059393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.162 [2024-11-20 07:03:05.711726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.421 #34 NEW cov: 12464 ft: 15404 corp: 23/425b lim: 40 exec/s: 34 rss: 74Mb L: 31/40 MS: 1 PersAutoDict- DE: "\001\005"- 00:07:01.421 [2024-11-20 07:03:05.751806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.421 [2024-11-20 07:03:05.751831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.421 [2024-11-20 07:03:05.751885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.421 [2024-11-20 07:03:05.751899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.421 [2024-11-20 07:03:05.751950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.421 [2024-11-20 07:03:05.751962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.421 [2024-11-20 07:03:05.752017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.421 [2024-11-20 07:03:05.752029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.421 #35 NEW cov: 12464 ft: 15441 corp: 24/457b lim: 40 exec/s: 35 rss: 74Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:07:01.421 [2024-11-20 07:03:05.791421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000105 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.421 [2024-11-20 07:03:05.791446] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.421 #36 NEW cov: 12464 ft: 15466 corp: 25/470b lim: 40 exec/s: 36 rss: 74Mb L: 13/40 MS: 1 CrossOver- 00:07:01.421 [2024-11-20 07:03:05.831527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.421 [2024-11-20 07:03:05.831551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.421 #37 NEW cov: 12464 ft: 15495 corp: 26/484b lim: 40 exec/s: 37 rss: 74Mb L: 14/40 MS: 1 CrossOver- 00:07:01.422 [2024-11-20 07:03:05.891822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.422 [2024-11-20 07:03:05.891848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.422 #38 NEW cov: 12464 ft: 15514 corp: 27/499b lim: 40 exec/s: 38 rss: 74Mb L: 15/40 MS: 1 CopyPart- 00:07:01.422 [2024-11-20 07:03:05.951915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.422 [2024-11-20 07:03:05.951940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.422 #39 NEW cov: 12464 ft: 15566 corp: 28/513b lim: 40 exec/s: 39 rss: 74Mb L: 14/40 MS: 1 ShuffleBytes- 00:07:01.681 [2024-11-20 07:03:05.991997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:05.992024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 #40 NEW cov: 12464 ft: 15579 corp: 29/526b lim: 40 exec/s: 40 rss: 74Mb L: 13/40 MS: 1 ChangeBinInt- 00:07:01.681 [2024-11-20 07:03:06.032751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ba472747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.032776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 [2024-11-20 07:03:06.032830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.032843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.681 [2024-11-20 07:03:06.032895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.032910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.681 [2024-11-20 07:03:06.032964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.032976] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.681 [2024-11-20 07:03:06.033029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.033042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.681 #41 NEW cov: 12464 ft: 15583 corp: 30/566b lim: 40 exec/s: 41 rss: 74Mb L: 40/40 MS: 1 ChangeByte- 00:07:01.681 [2024-11-20 07:03:06.092352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000a05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.092376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 #42 NEW cov: 12464 ft: 15588 corp: 31/579b lim: 40 exec/s: 42 rss: 75Mb L: 13/40 MS: 1 ChangeBinInt- 00:07:01.681 [2024-11-20 07:03:06.152785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.152809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 [2024-11-20 07:03:06.152877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93939393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.152891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.681 [2024-11-20 07:03:06.152946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:01059393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.152959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.681 #43 NEW cov: 12464 ft: 15596 corp: 32/610b lim: 40 exec/s: 43 rss: 75Mb L: 31/40 MS: 1 PersAutoDict- DE: "\016\000\000\000"- 00:07:01.681 [2024-11-20 07:03:06.212847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.212872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 [2024-11-20 07:03:06.212929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.681 [2024-11-20 07:03:06.212943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.940 #44 NEW cov: 12464 ft: 15605 corp: 33/628b lim: 40 exec/s: 44 rss: 75Mb L: 18/40 MS: 1 EraseBytes- 00:07:01.940 [2024-11-20 07:03:06.272836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.940 [2024-11-20 07:03:06.272860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:01.940 #45 NEW cov: 12464 ft: 15615 corp: 34/643b lim: 40 exec/s: 45 rss: 75Mb L: 15/40 MS: 1 ShuffleBytes- 00:07:01.940 [2024-11-20 07:03:06.313045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00ff000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.940 [2024-11-20 07:03:06.313071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.940 [2024-11-20 07:03:06.313125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.940 [2024-11-20 07:03:06.313138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.940 #46 NEW cov: 12464 ft: 15623 corp: 35/665b lim: 40 exec/s: 46 rss: 75Mb L: 22/40 MS: 1 CrossOver- 00:07:01.940 [2024-11-20 07:03:06.373089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.940 [2024-11-20 07:03:06.373114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.940 #47 NEW cov: 12464 ft: 15667 corp: 36/680b lim: 40 exec/s: 23 rss: 75Mb L: 15/40 MS: 1 PersAutoDict- DE: "\016\000\000\000"- 00:07:01.940 #47 DONE cov: 12464 ft: 15667 corp: 36/680b lim: 40 exec/s: 23 rss: 75Mb 00:07:01.940 ###### Recommended dictionary. ###### 00:07:01.940 "\001\005" # Uses: 1 00:07:01.940 "\016\000\000\000" # Uses: 2 00:07:01.940 ###### End of recommended dictionary. ###### 00:07:01.940 Done 47 runs in 2 second(s) 00:07:02.199 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:02.200 07:03:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:02.200 [2024-11-20 07:03:06.565079] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:02.200 [2024-11-20 07:03:06.565146] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780408 ] 00:07:02.459 [2024-11-20 07:03:06.755358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.459 [2024-11-20 07:03:06.792990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.459 [2024-11-20 07:03:06.852524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.459 [2024-11-20 07:03:06.868874] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:02.459 INFO: Running with entropic power schedule (0xFF, 100). 00:07:02.459 INFO: Seed: 2433988435 00:07:02.459 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:02.459 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:02.459 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:02.459 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.459 #2 INITED exec/s: 0 rss: 66Mb 00:07:02.459 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:02.459 This may also happen if the target rejected all inputs we tried so far 00:07:02.459 [2024-11-20 07:03:06.945342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.459 [2024-11-20 07:03:06.945379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.459 [2024-11-20 07:03:06.945517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.459 [2024-11-20 07:03:06.945536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.718 NEW_FUNC[1/715]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:02.718 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.718 #6 NEW cov: 12225 ft: 12227 corp: 2/24b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 4 ChangeBit-CopyPart-InsertByte-InsertRepeatedBytes- 00:07:02.976 [2024-11-20 07:03:07.296291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.296339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.976 [2024-11-20 07:03:07.296475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.296497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.976 #7 NEW cov: 12339 ft: 12890 corp: 3/47b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeASCIIInt- 00:07:02.976 [2024-11-20 07:03:07.366854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.366887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.976 [2024-11-20 07:03:07.367023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.367040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.976 [2024-11-20 07:03:07.367176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35083000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.367193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.976 [2024-11-20 07:03:07.367332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 
[2024-11-20 07:03:07.367350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.976 #8 NEW cov: 12345 ft: 13595 corp: 4/85b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:02.976 [2024-11-20 07:03:07.436580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.436614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.976 [2024-11-20 07:03:07.436750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.436768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.976 #9 NEW cov: 12430 ft: 13824 corp: 5/108b lim: 40 exec/s: 0 rss: 73Mb L: 23/38 MS: 1 ChangeASCIIInt- 00:07:02.976 [2024-11-20 07:03:07.486954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10003535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.486985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.976 [2024-11-20 07:03:07.487119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.487137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.976 [2024-11-20 07:03:07.487272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.976 [2024-11-20 07:03:07.487292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.976 #10 NEW cov: 12430 ft: 14048 corp: 6/133b lim: 40 exec/s: 0 rss: 73Mb L: 25/38 MS: 1 CMP- DE: "\020\000"- 00:07:03.236 [2024-11-20 07:03:07.537199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.537229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.537367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.537385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.537512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.537529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.236 #12 NEW cov: 12430 ft: 
14121 corp: 7/163b lim: 40 exec/s: 0 rss: 73Mb L: 30/38 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:03.236 [2024-11-20 07:03:07.587225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.587254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.587387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.587405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.587534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.587559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.236 #13 NEW cov: 12430 ft: 14162 corp: 8/193b lim: 40 exec/s: 0 rss: 74Mb L: 30/38 MS: 1 ChangeByte- 00:07:03.236 [2024-11-20 07:03:07.657447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.657474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.657605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.657623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.657755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.657772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.236 #14 NEW cov: 12430 ft: 14202 corp: 9/222b lim: 40 exec/s: 0 rss: 74Mb L: 29/38 MS: 1 CopyPart- 00:07:03.236 [2024-11-20 07:03:07.707587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.707622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.707774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.707792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.707927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 
[2024-11-20 07:03:07.707946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.236 #15 NEW cov: 12430 ft: 14283 corp: 10/251b lim: 40 exec/s: 0 rss: 74Mb L: 29/38 MS: 1 CopyPart- 00:07:03.236 [2024-11-20 07:03:07.777782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10003535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.777814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.777941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353515 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.777958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.236 [2024-11-20 07:03:07.778094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.236 [2024-11-20 07:03:07.778113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.494 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:03.494 #16 NEW cov: 12453 ft: 14338 corp: 11/276b lim: 40 exec/s: 0 rss: 74Mb L: 25/38 MS: 1 ChangeBit- 00:07:03.494 [2024-11-20 07:03:07.848088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10003535 cdw11:41353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.848118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.494 [2024-11-20 07:03:07.848253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.848271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.494 [2024-11-20 07:03:07.848405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.848425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.494 #17 NEW cov: 12453 ft: 14339 corp: 12/301b lim: 40 exec/s: 0 rss: 74Mb L: 25/38 MS: 1 ChangeByte- 00:07:03.494 [2024-11-20 07:03:07.898216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.898245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.494 [2024-11-20 07:03:07.898387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3535352b cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.898403] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.494 [2024-11-20 07:03:07.898529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.898547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.494 #18 NEW cov: 12453 ft: 14357 corp: 13/331b lim: 40 exec/s: 18 rss: 74Mb L: 30/38 MS: 1 InsertByte- 00:07:03.494 [2024-11-20 07:03:07.968359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10003535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.968387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.494 [2024-11-20 07:03:07.968524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353515 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.968542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.494 [2024-11-20 07:03:07.968686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.494 [2024-11-20 07:03:07.968704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.494 #19 NEW cov: 12453 ft: 14376 corp: 14/356b lim: 40 exec/s: 19 rss: 74Mb L: 25/38 MS: 1 CrossOver- 00:07:03.494 [2024-11-20 07:03:08.038680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.495 [2024-11-20 07:03:08.038712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.495 [2024-11-20 07:03:08.038849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94959494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.495 [2024-11-20 07:03:08.038868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.495 [2024-11-20 07:03:08.039007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.495 [2024-11-20 07:03:08.039026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.753 #20 NEW cov: 12453 ft: 14395 corp: 15/386b lim: 40 exec/s: 20 rss: 74Mb L: 30/38 MS: 1 ChangeBit- 00:07:03.753 [2024-11-20 07:03:08.089047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.089077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.753 [2024-11-20 07:03:08.089209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.089227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.753 [2024-11-20 07:03:08.089360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.089378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.753 [2024-11-20 07:03:08.089509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.089529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.753 #21 NEW cov: 12453 ft: 14416 corp: 16/420b lim: 40 exec/s: 21 rss: 74Mb L: 34/38 MS: 1 CopyPart- 00:07:03.753 [2024-11-20 07:03:08.138456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.138485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.753 #22 NEW cov: 12453 ft: 14843 corp: 17/433b lim: 40 exec/s: 22 rss: 74Mb L: 13/38 MS: 1 EraseBytes- 00:07:03.753 [2024-11-20 07:03:08.189117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10003535 cdw11:41353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.189148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.753 [2024-11-20 07:03:08.189282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:00353541 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.189312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.753 [2024-11-20 07:03:08.189439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.189459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.753 #23 NEW cov: 12453 ft: 14872 corp: 18/458b lim: 40 exec/s: 23 rss: 74Mb L: 25/38 MS: 1 CopyPart- 00:07:03.753 [2024-11-20 07:03:08.239129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.239159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.753 [2024-11-20 07:03:08.239289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.239307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.753 #24 NEW cov: 12453 ft: 14955 corp: 19/481b lim: 40 exec/s: 24 rss: 74Mb L: 23/38 MS: 1 ShuffleBytes- 00:07:03.753 [2024-11-20 07:03:08.289459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.289488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.753 [2024-11-20 07:03:08.289621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94959494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.753 [2024-11-20 07:03:08.289638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.753 [2024-11-20 07:03:08.289771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.754 [2024-11-20 07:03:08.289790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.012 #25 NEW cov: 12453 ft: 14971 corp: 20/512b lim: 40 exec/s: 25 rss: 74Mb L: 31/38 MS: 1 CrossOver- 00:07:04.012 [2024-11-20 07:03:08.359795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.359826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.359964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.359982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.360114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.360133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.360270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:35353535 cdw11:35353508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.360288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.012 #26 NEW cov: 12453 ft: 14981 corp: 21/545b lim: 40 exec/s: 26 rss: 74Mb L: 33/38 MS: 1 CopyPart- 00:07:04.012 [2024-11-20 07:03:08.409665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.409692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.409828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 
cdw10:01353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.409846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.409982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.409999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.012 #27 NEW cov: 12453 ft: 15050 corp: 22/576b lim: 40 exec/s: 27 rss: 74Mb L: 31/38 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\001"- 00:07:04.012 [2024-11-20 07:03:08.459798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.459827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.459966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.459984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.460115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94100094 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.460135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.012 #28 NEW cov: 12453 ft: 15103 corp: 23/606b lim: 40 exec/s: 28 rss: 74Mb L: 30/38 MS: 1 CrossOver- 00:07:04.012 [2024-11-20 07:03:08.510016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.510042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.510189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94959494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.510208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.012 [2024-11-20 07:03:08.510340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.012 [2024-11-20 07:03:08.510358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.012 #29 NEW cov: 12453 ft: 15179 corp: 24/636b lim: 40 exec/s: 29 rss: 74Mb L: 30/38 MS: 1 ShuffleBytes- 00:07:04.012 [2024-11-20 07:03:08.560640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3535ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.013 [2024-11-20 07:03:08.560669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.013 [2024-11-20 07:03:08.560798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.013 [2024-11-20 07:03:08.560819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.013 [2024-11-20 07:03:08.560954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.013 [2024-11-20 07:03:08.560970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.013 [2024-11-20 07:03:08.561098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.013 [2024-11-20 07:03:08.561115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.013 [2024-11-20 07:03:08.561245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:35353535 cdw11:35350830 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.013 [2024-11-20 07:03:08.561263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.271 #30 NEW cov: 12453 ft: 15267 corp: 25/676b lim: 40 exec/s: 30 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:04.271 [2024-11-20 07:03:08.630470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ab100035 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.630498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.271 [2024-11-20 07:03:08.630636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.630669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.271 [2024-11-20 07:03:08.630805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.630821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.271 #31 NEW cov: 12453 ft: 15273 corp: 26/702b lim: 40 exec/s: 31 rss: 74Mb L: 26/40 MS: 1 InsertByte- 00:07:04.271 [2024-11-20 07:03:08.680336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10003535 cdw11:41353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.680366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.271 [2024-11-20 07:03:08.680503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:00353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 
07:03:08.680521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.271 #32 NEW cov: 12453 ft: 15276 corp: 27/723b lim: 40 exec/s: 32 rss: 74Mb L: 21/40 MS: 1 EraseBytes- 00:07:04.271 [2024-11-20 07:03:08.750509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:67565656 cdw11:56565656 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.750554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.271 [2024-11-20 07:03:08.750686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:56565656 cdw11:56565656 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.750707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.271 #37 NEW cov: 12453 ft: 15294 corp: 28/746b lim: 40 exec/s: 37 rss: 74Mb L: 23/40 MS: 5 ChangeBit-ChangeByte-InsertRepeatedBytes-CopyPart-InsertRepeatedBytes- 00:07:04.271 [2024-11-20 07:03:08.800993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.801022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.271 [2024-11-20 07:03:08.801151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.801170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.271 [2024-11-20 07:03:08.801295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.271 [2024-11-20 07:03:08.801312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.271 #38 NEW cov: 12453 ft: 15309 corp: 29/776b lim: 40 exec/s: 38 rss: 74Mb L: 30/40 MS: 1 InsertByte- 00:07:04.530 [2024-11-20 07:03:08.850869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:35353535 cdw11:35353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.530 [2024-11-20 07:03:08.850900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.530 [2024-11-20 07:03:08.851029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:75353535 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.530 [2024-11-20 07:03:08.851048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.530 #39 NEW cov: 12453 ft: 15316 corp: 30/799b lim: 40 exec/s: 39 rss: 75Mb L: 23/40 MS: 1 ChangeBit- 00:07:04.530 [2024-11-20 07:03:08.921342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.530 [2024-11-20 07:03:08.921371] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.530 [2024-11-20 07:03:08.921516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94959494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.530 [2024-11-20 07:03:08.921534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.530 [2024-11-20 07:03:08.921666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.530 [2024-11-20 07:03:08.921684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.530 #40 NEW cov: 12453 ft: 15322 corp: 31/830b lim: 40 exec/s: 20 rss: 75Mb L: 31/40 MS: 1 ShuffleBytes- 00:07:04.531 #40 DONE cov: 12453 ft: 15322 corp: 31/830b lim: 40 exec/s: 20 rss: 75Mb 00:07:04.531 ###### Recommended dictionary. ###### 00:07:04.531 "\020\000" # Uses: 0 00:07:04.531 "\001\000\000\000\000\000\000\001" # Uses: 0 00:07:04.531 ###### End of recommended dictionary. ###### 00:07:04.531 Done 40 runs in 2 second(s) 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:04.531 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.791 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.791 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.791 07:03:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:04.791 [2024-11-20 07:03:09.116944] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:04.791 [2024-11-20 07:03:09.117008] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781232 ] 00:07:04.791 [2024-11-20 07:03:09.304054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.791 [2024-11-20 07:03:09.341695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.051 [2024-11-20 07:03:09.401268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.051 [2024-11-20 07:03:09.417593] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:05.051 INFO: Running with entropic power schedule (0xFF, 100). 00:07:05.051 INFO: Seed: 688008345 00:07:05.051 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:05.051 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:05.051 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:05.051 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.051 #2 INITED exec/s: 0 rss: 65Mb 00:07:05.051 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:05.051 This may also happen if the target rejected all inputs we tried so far 00:07:05.051 [2024-11-20 07:03:09.484639] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.051 [2024-11-20 07:03:09.484681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.051 [2024-11-20 07:03:09.484829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.051 [2024-11-20 07:03:09.484856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.051 [2024-11-20 07:03:09.484981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.051 [2024-11-20 07:03:09.485006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.051 [2024-11-20 07:03:09.485139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.051 [2024-11-20 07:03:09.485166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.310 NEW_FUNC[1/717]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:05.310 NEW_FUNC[2/717]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:05.310 #3 NEW cov: 12230 ft: 12227 corp: 2/30b lim: 35 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:05.310 [2024-11-20 07:03:09.835346] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.310 [2024-11-20 07:03:09.835391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.310 [2024-11-20 07:03:09.835539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.310 [2024-11-20 07:03:09.835561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.310 [2024-11-20 07:03:09.835695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.310 [2024-11-20 07:03:09.835716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.310 #4 NEW cov: 12350 ft: 13161 corp: 3/51b lim: 35 exec/s: 0 rss: 73Mb L: 21/29 MS: 1 InsertRepeatedBytes- 00:07:05.569 [2024-11-20 07:03:09.875251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.875279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:09.875419] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.875435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:09.875568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.875585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.569 #5 NEW cov: 12356 ft: 13349 corp: 4/72b lim: 35 exec/s: 0 rss: 73Mb L: 21/29 MS: 1 ShuffleBytes- 00:07:05.569 [2024-11-20 07:03:09.935724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.935758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:09.935893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.935913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:09.936051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.936074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:09.936204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.936229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.569 #6 NEW cov: 12441 ft: 13639 corp: 5/101b lim: 35 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 ChangeByte- 00:07:05.569 [2024-11-20 07:03:09.995604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.995633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:09.995756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.995773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:09.995898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:09.995914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.569 #7 NEW cov: 12441 ft: 13686 corp: 6/122b lim: 35 exec/s: 0 rss: 73Mb L: 21/29 MS: 1 ShuffleBytes- 00:07:05.569 [2024-11-20 07:03:10.065995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:10.066025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:10.066178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:10.066197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:10.066334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:10.066353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.569 #8 NEW cov: 12441 ft: 13841 corp: 7/144b lim: 35 exec/s: 0 rss: 73Mb L: 22/29 MS: 1 InsertByte- 00:07:05.569 [2024-11-20 07:03:10.116155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:10.116185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:10.116316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:10.116335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.569 [2024-11-20 07:03:10.116476] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.569 [2024-11-20 07:03:10.116495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.828 #9 NEW cov: 12441 ft: 13926 corp: 8/167b lim: 35 exec/s: 0 rss: 73Mb L: 23/29 MS: 1 InsertByte- 00:07:05.828 [2024-11-20 07:03:10.186270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.186301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.828 [2024-11-20 07:03:10.186429] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.186450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.828 [2024-11-20 07:03:10.186583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.186605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.828 #10 NEW cov: 12441 ft: 13950 corp: 9/189b lim: 35 exec/s: 0 rss: 73Mb L: 22/29 MS: 1 InsertByte- 00:07:05.828 [2024-11-20 07:03:10.247004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.247037] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.828 [2024-11-20 07:03:10.247170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.247191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.828 [2024-11-20 07:03:10.247327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.247352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.828 [2024-11-20 07:03:10.247474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.247498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.828 [2024-11-20 07:03:10.247633] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.247656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.828 #11 NEW cov: 12441 ft: 14105 corp: 10/224b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:05.828 [2024-11-20 07:03:10.316604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.316632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.828 [2024-11-20 07:03:10.316764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.316783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.828 [2024-11-20 07:03:10.316914] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.828 [2024-11-20 07:03:10.316931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.828 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:05.828 #12 NEW cov: 12464 ft: 14179 corp: 11/246b lim: 35 exec/s: 0 rss: 74Mb L: 22/35 MS: 1 ChangeByte- 00:07:06.087 [2024-11-20 07:03:10.387210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.387242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.387375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.387393] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.387517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.387537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.087 NEW_FUNC[1/1]: 0x1380218 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1766 00:07:06.087 #13 NEW cov: 12487 ft: 14237 corp: 12/276b lim: 35 exec/s: 0 rss: 74Mb L: 30/35 MS: 1 InsertByte- 00:07:06.087 [2024-11-20 07:03:10.447031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.447059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.447197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.447215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.447349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.447368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.087 #14 NEW cov: 12487 ft: 14286 corp: 13/299b lim: 35 exec/s: 14 rss: 74Mb L: 23/35 MS: 1 CopyPart- 00:07:06.087 [2024-11-20 07:03:10.517640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.517677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.517812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.517836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.517971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.517995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.087 #15 NEW cov: 12487 ft: 14295 corp: 14/329b lim: 35 exec/s: 15 rss: 74Mb L: 30/35 MS: 1 ShuffleBytes- 00:07:06.087 [2024-11-20 07:03:10.586925] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.586961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.087 #17 NEW cov: 12487 ft: 15005 corp: 15/340b lim: 35 exec/s: 17 rss: 74Mb L: 11/35 MS: 2 ChangeByte-InsertRepeatedBytes- 
00:07:06.087 [2024-11-20 07:03:10.637947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.637980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.638112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.638135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.638263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.638286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.087 [2024-11-20 07:03:10.638428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.087 [2024-11-20 07:03:10.638457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.346 #18 NEW cov: 12487 ft: 15053 corp: 16/370b lim: 35 exec/s: 18 rss: 74Mb L: 30/35 MS: 1 CopyPart- 00:07:06.346 [2024-11-20 07:03:10.687170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.346 [2024-11-20 07:03:10.687198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.346 #19 NEW cov: 12487 ft: 15080 corp: 17/381b lim: 35 exec/s: 19 rss: 74Mb L: 11/35 MS: 1 ChangeByte- 00:07:06.346 [2024-11-20 07:03:10.758018] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.346 [2024-11-20 07:03:10.758043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.346 [2024-11-20 07:03:10.758177] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.346 [2024-11-20 07:03:10.758193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.346 [2024-11-20 07:03:10.758323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.346 [2024-11-20 07:03:10.758340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.346 #20 NEW cov: 12487 ft: 15083 corp: 18/402b lim: 35 exec/s: 20 rss: 74Mb L: 21/35 MS: 1 ChangeBit- 00:07:06.346 [2024-11-20 07:03:10.808194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.346 [2024-11-20 07:03:10.808228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.346 [2024-11-20 07:03:10.808371] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.346 [2024-11-20 07:03:10.808393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.346 #21 NEW cov: 12487 ft: 15205 corp: 19/424b lim: 35 exec/s: 21 rss: 74Mb L: 22/35 MS: 1 EraseBytes- 00:07:06.346 [2024-11-20 07:03:10.877807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.346 [2024-11-20 07:03:10.877841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.346 #22 NEW cov: 12487 ft: 15220 corp: 20/433b lim: 35 exec/s: 22 rss: 74Mb L: 9/35 MS: 1 EraseBytes- 00:07:06.606 [2024-11-20 07:03:10.928548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:10.928579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.606 [2024-11-20 07:03:10.928729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:10.928752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.606 #23 NEW cov: 12487 ft: 15222 corp: 21/455b lim: 35 exec/s: 23 rss: 74Mb L: 22/35 MS: 1 ChangeByte- 00:07:06.606 [2024-11-20 07:03:10.998640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:10.998670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.606 [2024-11-20 07:03:10.998809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:10.998832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.606 [2024-11-20 07:03:10.998976] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:10.998993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.606 #24 NEW cov: 12487 ft: 15230 corp: 22/479b lim: 35 exec/s: 24 rss: 74Mb L: 24/35 MS: 1 InsertByte- 00:07:06.606 [2024-11-20 07:03:11.069020] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:11.069053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.606 [2024-11-20 07:03:11.069189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:11.069213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:06.606 #30 NEW cov: 12487 ft: 15241 corp: 23/501b lim: 35 exec/s: 30 rss: 74Mb L: 22/35 MS: 1 ChangeBit- 00:07:06.606 [2024-11-20 07:03:11.118812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:11.118847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.606 [2024-11-20 07:03:11.118978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.606 [2024-11-20 07:03:11.119001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.606 #31 NEW cov: 12487 ft: 15320 corp: 24/521b lim: 35 exec/s: 31 rss: 74Mb L: 20/35 MS: 1 EraseBytes- 00:07:06.865 [2024-11-20 07:03:11.169547] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.169585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.169718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.169741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.169862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.169888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.170005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.170023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.865 #32 NEW cov: 12487 ft: 15329 corp: 25/552b lim: 35 exec/s: 32 rss: 75Mb L: 31/35 MS: 1 InsertByte- 00:07:06.865 [2024-11-20 07:03:11.229812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.229845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.229981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.230007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.230128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.230162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:06.865 [2024-11-20 07:03:11.230284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.230309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.230433] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.230455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.865 #33 NEW cov: 12487 ft: 15354 corp: 26/587b lim: 35 exec/s: 33 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:07:06.865 [2024-11-20 07:03:11.289289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.289317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.289447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.289462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.865 #34 NEW cov: 12487 ft: 15411 corp: 27/601b lim: 35 exec/s: 34 rss: 75Mb L: 14/35 MS: 1 EraseBytes- 00:07:06.865 [2024-11-20 07:03:11.329629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.329655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.329784] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.329802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.329925] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.329951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.865 #35 NEW cov: 12487 ft: 15426 corp: 28/623b lim: 35 exec/s: 35 rss: 75Mb L: 22/35 MS: 1 InsertRepeatedBytes- 00:07:06.865 [2024-11-20 07:03:11.390286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.390318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.390448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.390473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.865 [2024-11-20 07:03:11.390601] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.865 [2024-11-20 07:03:11.390625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.865 #36 NEW cov: 12487 ft: 15440 corp: 29/657b lim: 35 exec/s: 36 rss: 75Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:07.124 [2024-11-20 07:03:11.430415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.124 [2024-11-20 07:03:11.430449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.124 [2024-11-20 07:03:11.430579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.124 [2024-11-20 07:03:11.430603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.124 [2024-11-20 07:03:11.430723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.124 [2024-11-20 07:03:11.430747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.124 [2024-11-20 07:03:11.430869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.124 [2024-11-20 07:03:11.430888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.124 [2024-11-20 07:03:11.431018] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.124 [2024-11-20 07:03:11.431042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.124 #37 NEW cov: 12487 ft: 15504 corp: 30/692b lim: 35 exec/s: 37 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:07.124 [2024-11-20 07:03:11.470018] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.124 [2024-11-20 07:03:11.470047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.124 [2024-11-20 07:03:11.470188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.124 [2024-11-20 07:03:11.470205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.124 [2024-11-20 07:03:11.470341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.124 [2024-11-20 07:03:11.470362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.124 #38 NEW cov: 12487 ft: 15526 corp: 31/714b lim: 35 exec/s: 19 rss: 75Mb L: 22/35 MS: 1 CrossOver- 00:07:07.124 #38 DONE cov: 12487 ft: 15526 corp: 31/714b lim: 35 
exec/s: 19 rss: 75Mb 00:07:07.124 Done 38 runs in 2 second(s) 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:07.124 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:07.125 07:03:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:07.125 [2024-11-20 07:03:11.662540] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:07.125 [2024-11-20 07:03:11.662633] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781796 ] 00:07:07.384 [2024-11-20 07:03:11.846151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.384 [2024-11-20 07:03:11.879192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.384 [2024-11-20 07:03:11.938531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.642 [2024-11-20 07:03:11.954827] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:07.642 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.642 INFO: Seed: 3226028361 00:07:07.642 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:07.642 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:07.642 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:07.642 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.642 #2 INITED exec/s: 0 rss: 66Mb 00:07:07.642 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:07.642 This may also happen if the target rejected all inputs we tried so far 00:07:07.642 [2024-11-20 07:03:11.999817] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.642 [2024-11-20 07:03:11.999850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.642 [2024-11-20 07:03:11.999899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.642 [2024-11-20 07:03:11.999915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.642 [2024-11-20 07:03:11.999946] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.642 [2024-11-20 07:03:11.999962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.642 [2024-11-20 07:03:11.999992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.642 [2024-11-20 07:03:12.000007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.642 [2024-11-20 07:03:12.000037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.642 [2024-11-20 07:03:12.000056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.901 NEW_FUNC[1/715]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:07.901 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:07.901 #5 NEW cov: 
12198 ft: 12196 corp: 2/36b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:07.901 [2024-11-20 07:03:12.340694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.901 [2024-11-20 07:03:12.340731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.901 [2024-11-20 07:03:12.340781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.901 [2024-11-20 07:03:12.340797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.901 [2024-11-20 07:03:12.340827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.901 [2024-11-20 07:03:12.340843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.901 [2024-11-20 07:03:12.340873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.901 [2024-11-20 07:03:12.340888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.901 [2024-11-20 07:03:12.340917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.901 [2024-11-20 07:03:12.340932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.901 #6 NEW cov: 12321 ft: 12824 corp: 3/71b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:07.901 [2024-11-20 07:03:12.430713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.901 [2024-11-20 07:03:12.430743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.901 [2024-11-20 07:03:12.430792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.901 [2024-11-20 07:03:12.430807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.901 [2024-11-20 07:03:12.430838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.901 [2024-11-20 07:03:12.430853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.160 #7 NEW cov: 12327 ft: 13516 corp: 4/95b lim: 35 exec/s: 0 rss: 74Mb L: 24/35 MS: 1 EraseBytes- 00:07:08.160 [2024-11-20 07:03:12.520878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.520909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.160 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:08.160 #8 NEW cov: 12426 ft: 14097 corp: 5/112b lim: 35 exec/s: 0 rss: 74Mb L: 17/35 MS: 1 InsertRepeatedBytes- 00:07:08.160 [2024-11-20 07:03:12.581193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.581227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.581275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.581291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.581321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.581336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.581366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.581381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.581410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.581425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.160 #9 NEW cov: 12426 ft: 14259 corp: 6/147b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:08.160 [2024-11-20 07:03:12.641270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.641301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.641350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.641365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.641396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.641411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.641441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.641456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.160 #10 NEW cov: 12426 ft: 14360 corp: 7/180b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 
EraseBytes- 00:07:08.160 [2024-11-20 07:03:12.691477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.691508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.691542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.691558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.691588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.691611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.691642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.691656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.160 [2024-11-20 07:03:12.691690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.160 [2024-11-20 07:03:12.691705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.419 #11 NEW cov: 12426 ft: 14413 corp: 8/215b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:08.419 [2024-11-20 07:03:12.772279] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-20 07:03:12.772307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 #12 NEW cov: 12426 ft: 14591 corp: 9/233b lim: 35 exec/s: 0 rss: 74Mb L: 18/35 MS: 1 CrossOver- 00:07:08.419 [2024-11-20 07:03:12.832447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-20 07:03:12.832474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 #13 NEW cov: 12426 ft: 14720 corp: 10/250b lim: 35 exec/s: 0 rss: 74Mb L: 17/35 MS: 1 ShuffleBytes- 00:07:08.419 [2024-11-20 07:03:12.872550] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-20 07:03:12.872576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:08.419 #14 NEW cov: 12449 ft: 14794 corp: 11/267b lim: 35 exec/s: 0 rss: 74Mb L: 17/35 MS: 1 ChangeByte- 00:07:08.419 [2024-11-20 07:03:12.932714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 
[2024-11-20 07:03:12.932740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 [2024-11-20 07:03:12.932801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-20 07:03:12.932815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.677 #15 NEW cov: 12449 ft: 14963 corp: 12/284b lim: 35 exec/s: 0 rss: 74Mb L: 17/35 MS: 1 EraseBytes- 00:07:08.677 [2024-11-20 07:03:12.993013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.677 [2024-11-20 07:03:12.993039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.677 [2024-11-20 07:03:12.993099] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.677 [2024-11-20 07:03:12.993113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.677 [2024-11-20 07:03:12.993175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.677 [2024-11-20 07:03:12.993188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.677 #16 NEW cov: 12449 ft: 14980 corp: 13/311b lim: 35 exec/s: 16 rss: 74Mb L: 27/35 MS: 1 EraseBytes- 00:07:08.677 [2024-11-20 07:03:13.033363] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.677 [2024-11-20 07:03:13.033388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.677 [2024-11-20 07:03:13.033448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.033465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.033525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.033538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.033596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.033615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.033676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.033690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.678 #17 NEW cov: 12449 ft: 15012 corp: 14/346b 
lim: 35 exec/s: 17 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:08.678 [2024-11-20 07:03:13.073500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.073526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.073585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.073602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.073664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.073678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.073737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.073751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.073809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.073822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.678 #18 NEW cov: 12449 ft: 15106 corp: 15/381b lim: 35 exec/s: 18 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:08.678 [2024-11-20 07:03:13.133386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.133412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.133471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.133485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.678 [2024-11-20 07:03:13.133547] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.133560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.678 #19 NEW cov: 12449 ft: 15131 corp: 16/406b lim: 35 exec/s: 19 rss: 74Mb L: 25/35 MS: 1 CopyPart- 00:07:08.678 [2024-11-20 07:03:13.193476] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-20 07:03:13.193501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.678 #20 NEW cov: 12449 ft: 15156 corp: 17/423b lim: 35 exec/s: 20 rss: 74Mb L: 17/35 MS: 1 ChangeBit- 00:07:08.937 [2024-11-20 07:03:13.233735] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.233763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.233828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.233842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.233903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.233916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.937 #21 NEW cov: 12449 ft: 15250 corp: 18/447b lim: 35 exec/s: 21 rss: 74Mb L: 24/35 MS: 1 CMP- DE: "\000\000\000\000\000\000\000H"- 00:07:08.937 [2024-11-20 07:03:13.273848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.273873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.273949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.273963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.274025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.274038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.937 #22 NEW cov: 12449 ft: 15290 corp: 19/472b lim: 35 exec/s: 22 rss: 74Mb L: 25/35 MS: 1 ChangeBinInt- 00:07:08.937 [2024-11-20 07:03:13.333997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.334022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.334085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.334098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.334158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.334172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.937 #23 NEW cov: 12449 ft: 15297 corp: 20/498b lim: 35 exec/s: 23 rss: 74Mb L: 26/35 MS: 1 InsertByte- 00:07:08.937 [2024-11-20 07:03:13.374324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.374349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.374411] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.374428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.374491] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.374505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.374567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.374581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.374643] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.374657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.937 #24 NEW cov: 12449 ft: 15329 corp: 21/533b lim: 35 exec/s: 24 rss: 74Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:08.937 [2024-11-20 07:03:13.414329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.414354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.414413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.414427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.414486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.414499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.937 [2024-11-20 07:03:13.414557] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000025d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.937 [2024-11-20 07:03:13.414571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.937 #25 NEW cov: 12449 ft: 15389 corp: 22/565b lim: 35 exec/s: 25 rss: 74Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:08.938 [2024-11-20 07:03:13.474281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.938 [2024-11-20 07:03:13.474305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.196 #26 NEW cov: 12449 ft: 15393 corp: 23/583b lim: 35 exec/s: 26 rss: 74Mb L: 18/35 MS: 1 ChangeBinInt- 00:07:09.196 [2024-11-20 07:03:13.534546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.196 [2024-11-20 07:03:13.534571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.196 [2024-11-20 07:03:13.534634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.196 [2024-11-20 07:03:13.534648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.196 [2024-11-20 07:03:13.534705] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.196 [2024-11-20 07:03:13.534718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.196 #27 NEW cov: 12449 ft: 15396 corp: 24/608b lim: 35 exec/s: 27 rss: 75Mb L: 25/35 MS: 1 InsertByte- 00:07:09.196 [2024-11-20 07:03:13.594714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.196 [2024-11-20 07:03:13.594739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.196 [2024-11-20 07:03:13.594800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.196 [2024-11-20 07:03:13.594813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.197 [2024-11-20 07:03:13.594876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.594890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.197 #28 NEW cov: 12449 ft: 15466 corp: 25/635b lim: 35 exec/s: 28 rss: 75Mb L: 27/35 MS: 1 ChangeBit- 00:07:09.197 [2024-11-20 07:03:13.635071] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.635096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.197 [2024-11-20 07:03:13.635156] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.635169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.197 [2024-11-20 07:03:13.635228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.635241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.197 [2024-11-20 07:03:13.635299] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.635313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.197 [2024-11-20 07:03:13.635371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.635385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.197 #29 NEW cov: 12449 ft: 15490 corp: 26/670b lim: 35 exec/s: 29 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:07:09.197 [2024-11-20 07:03:13.674832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.674857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.197 #30 NEW cov: 12449 ft: 15507 corp: 27/687b lim: 35 exec/s: 30 rss: 75Mb L: 17/35 MS: 1 ChangeBinInt- 00:07:09.197 [2024-11-20 07:03:13.735150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.735175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.197 [2024-11-20 07:03:13.735236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.197 [2024-11-20 07:03:13.735250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.456 #31 NEW cov: 12449 ft: 15526 corp: 28/713b lim: 35 exec/s: 31 rss: 75Mb L: 26/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000H"- 00:07:09.456 [2024-11-20 07:03:13.795342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.456 [2024-11-20 07:03:13.795371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.456 [2024-11-20 07:03:13.795432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.456 [2024-11-20 07:03:13.795446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.456 [2024-11-20 07:03:13.795503] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.456 [2024-11-20 07:03:13.795517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.456 [2024-11-20 07:03:13.795591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000025d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.456 [2024-11-20 07:03:13.795611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.456 #32 NEW cov: 12449 ft: 15546 corp: 29/745b lim: 35 exec/s: 32 rss: 75Mb L: 32/35 MS: 
1 CopyPart- 00:07:09.456 [2024-11-20 07:03:13.855708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.456 [2024-11-20 07:03:13.855734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.855794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.855807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.855865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.855879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.855938] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.855951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.856010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.856023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.457 #33 NEW cov: 12449 ft: 15562 corp: 30/780b lim: 35 exec/s: 33 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:09.457 [2024-11-20 07:03:13.915856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.915882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.915959] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.915973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.916030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.916044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.916103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.916120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.916177] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.916192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.457 #34 NEW cov: 12449 ft: 15576 corp: 31/815b lim: 35 exec/s: 34 rss: 75Mb L: 35/35 MS: 1 ChangeByte- 00:07:09.457 [2024-11-20 07:03:13.955643] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.955670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.457 #35 NEW cov: 12449 ft: 15580 corp: 32/832b lim: 35 exec/s: 35 rss: 75Mb L: 17/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000H"- 00:07:09.457 [2024-11-20 07:03:13.995884] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.995910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.995972] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.996003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.457 [2024-11-20 07:03:13.996065] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.457 [2024-11-20 07:03:13.996080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.716 #36 NEW cov: 12449 ft: 15600 corp: 33/853b lim: 35 exec/s: 18 rss: 75Mb L: 21/35 MS: 1 EraseBytes- 00:07:09.716 #36 DONE cov: 12449 ft: 15600 corp: 33/853b lim: 35 exec/s: 18 rss: 75Mb 00:07:09.716 ###### Recommended dictionary. ###### 00:07:09.716 "\000\000\000\000\000\000\000H" # Uses: 2 00:07:09.716 ###### End of recommended dictionary. 
###### 00:07:09.716 Done 36 runs in 2 second(s) 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:09.716 07:03:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:09.716 [2024-11-20 07:03:14.189937] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
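(For context: the shell trace above shows the fixed per-instance wiring that common.sh/run.sh repeat for every fuzzer index. Index 16 maps to TCP port 4416 ("44" plus the zero-padded index), the stock fuzz_json.conf has its trsvcid rewritten for that port, two known LSAN leak suppressions are written out, and llvm_nvme_fuzz is launched against the resulting trid. A minimal bash sketch of the same setup follows; SPDK_DIR and the trimmed flag list are illustrative assumptions, not the canonical run.sh.)

# Sketch: per-instance fuzzer setup, mirroring the trace above.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}              # assumption: your SPDK checkout
FUZZER=16
PORT=$(printf '44%02d' "$FUZZER")             # index 16 -> port 4416
CFG=/tmp/fuzz_json_${FUZZER}.conf
CORPUS=$SPDK_DIR/../corpus/llvm_nvmf_${FUZZER}
TRID="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT"

mkdir -p "$CORPUS"
# Point the canned NVMe-oF target config at this instance's port.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$CFG"
# Suppress the two known allocation-lifetime leaks so LSAN stays quiet.
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m 0x1 -s 512 -F "$TRID" -c "$CFG" -t 1 -D "$CORPUS" -Z "$FUZZER"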
00:07:09.716 [2024-11-20 07:03:14.190030] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782202 ] 00:07:09.975 [2024-11-20 07:03:14.378678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.975 [2024-11-20 07:03:14.415805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.975 [2024-11-20 07:03:14.475447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.975 [2024-11-20 07:03:14.491769] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:09.975 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.975 INFO: Seed: 1467040280 00:07:09.975 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:09.975 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:09.975 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:09.975 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.975 #2 INITED exec/s: 0 rss: 65Mb 00:07:09.975 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:09.975 This may also happen if the target rejected all inputs we tried so far 00:07:10.233 [2024-11-20 07:03:14.537160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.233 [2024-11-20 07:03:14.537191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.233 [2024-11-20 07:03:14.537266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.233 [2024-11-20 07:03:14.537283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.492 NEW_FUNC[1/716]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:10.492 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:10.492 #19 NEW cov: 12312 ft: 12311 corp: 2/46b lim: 105 exec/s: 0 rss: 73Mb L: 45/45 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:10.492 [2024-11-20 07:03:14.867754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11863788342978323620 len:42149 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.492 [2024-11-20 07:03:14.867787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.492 #24 NEW cov: 12425 ft: 13277 corp: 3/74b lim: 105 exec/s: 0 rss: 73Mb L: 28/45 MS: 5 ShuffleBytes-CrossOver-CopyPart-InsertByte-InsertRepeatedBytes- 00:07:10.492 [2024-11-20 07:03:14.907886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.492 [2024-11-20 07:03:14.907914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:10.492 [2024-11-20 07:03:14.907951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.492 [2024-11-20 07:03:14.907969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.492 #30 NEW cov: 12431 ft: 13541 corp: 4/119b lim: 105 exec/s: 0 rss: 73Mb L: 45/45 MS: 1 ShuffleBytes- 00:07:10.492 [2024-11-20 07:03:14.968056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.492 [2024-11-20 07:03:14.968085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.492 [2024-11-20 07:03:14.968121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782941969608432 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.492 [2024-11-20 07:03:14.968136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.492 #31 NEW cov: 12516 ft: 13839 corp: 5/164b lim: 105 exec/s: 0 rss: 73Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:10.492 [2024-11-20 07:03:15.028228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.492 [2024-11-20 07:03:15.028256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.492 [2024-11-20 07:03:15.028294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.492 [2024-11-20 07:03:15.028310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.751 #32 NEW cov: 12516 ft: 13939 corp: 6/210b lim: 105 exec/s: 0 rss: 73Mb L: 46/46 MS: 1 InsertByte- 00:07:10.751 [2024-11-20 07:03:15.068331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.751 [2024-11-20 07:03:15.068358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.751 [2024-11-20 07:03:15.068408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.751 [2024-11-20 07:03:15.068424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.751 #33 NEW cov: 12516 ft: 14004 corp: 7/256b lim: 105 exec/s: 0 rss: 73Mb L: 46/46 MS: 1 CopyPart- 00:07:10.751 [2024-11-20 07:03:15.128497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.751 [2024-11-20 07:03:15.128524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.751 [2024-11-20 07:03:15.128563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:0 lba:1518013314399015185 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.751 [2024-11-20 07:03:15.128578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.751 #34 NEW cov: 12516 ft: 14103 corp: 8/302b lim: 105 exec/s: 0 rss: 73Mb L: 46/46 MS: 1 ChangeBit- 00:07:10.751 [2024-11-20 07:03:15.188582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.751 [2024-11-20 07:03:15.188613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.751 #37 NEW cov: 12516 ft: 14149 corp: 9/326b lim: 105 exec/s: 0 rss: 73Mb L: 24/46 MS: 3 ChangeBit-ShuffleBytes-CrossOver- 00:07:10.751 [2024-11-20 07:03:15.228681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11863788342969935012 len:42149 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.751 [2024-11-20 07:03:15.228714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.751 #38 NEW cov: 12516 ft: 14187 corp: 10/354b lim: 105 exec/s: 0 rss: 73Mb L: 28/46 MS: 1 ChangeBit- 00:07:10.751 [2024-11-20 07:03:15.288927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.751 [2024-11-20 07:03:15.288956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.751 [2024-11-20 07:03:15.289006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1518013314399015185 len:20754 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.751 [2024-11-20 07:03:15.289022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.009 #39 NEW cov: 12516 ft: 14223 corp: 11/400b lim: 105 exec/s: 0 rss: 73Mb L: 46/46 MS: 1 ChangeBit- 00:07:11.009 [2024-11-20 07:03:15.349114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.349141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.010 [2024-11-20 07:03:15.349194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1157725343923044352 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.349210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.010 #40 NEW cov: 12516 ft: 14249 corp: 12/445b lim: 105 exec/s: 0 rss: 74Mb L: 45/46 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\020"- 00:07:11.010 [2024-11-20 07:03:15.409406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11863788342978323620 len:42149 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.409432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.010 [2024-11-20 07:03:15.409478] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:61423 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.409494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.010 [2024-11-20 07:03:15.409548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.409578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.010 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:11.010 #41 NEW cov: 12539 ft: 14605 corp: 13/518b lim: 105 exec/s: 0 rss: 74Mb L: 73/73 MS: 1 CrossOver- 00:07:11.010 [2024-11-20 07:03:15.449410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.449439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.010 [2024-11-20 07:03:15.449475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.449490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.010 #42 NEW cov: 12539 ft: 14628 corp: 14/564b lim: 105 exec/s: 0 rss: 74Mb L: 46/73 MS: 1 ChangeBinInt- 00:07:11.010 [2024-11-20 07:03:15.489382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.489413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.010 #43 NEW cov: 12539 ft: 14648 corp: 15/592b lim: 105 exec/s: 0 rss: 74Mb L: 28/73 MS: 1 EraseBytes- 00:07:11.010 [2024-11-20 07:03:15.529628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.529655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.010 [2024-11-20 07:03:15.529691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.010 [2024-11-20 07:03:15.529707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.010 #44 NEW cov: 12539 ft: 14662 corp: 16/637b lim: 105 exec/s: 44 rss: 74Mb L: 45/73 MS: 1 ChangeBinInt- 00:07:11.269 [2024-11-20 07:03:15.569881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11863788342978323620 len:42149 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.569908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.269 [2024-11-20 
07:03:15.569946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:61202 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.569961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.269 [2024-11-20 07:03:15.570013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1229782937962090769 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.570029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.269 #45 NEW cov: 12539 ft: 14717 corp: 17/710b lim: 105 exec/s: 45 rss: 74Mb L: 73/73 MS: 1 CrossOver- 00:07:11.269 [2024-11-20 07:03:15.629880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.629907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.269 [2024-11-20 07:03:15.629944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303664 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.629959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.269 #46 NEW cov: 12539 ft: 14725 corp: 18/755b lim: 105 exec/s: 46 rss: 74Mb L: 45/73 MS: 1 CrossOver- 00:07:11.269 [2024-11-20 07:03:15.669981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.670007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.269 [2024-11-20 07:03:15.670044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.670060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.269 #47 NEW cov: 12539 ft: 14735 corp: 19/801b lim: 105 exec/s: 47 rss: 74Mb L: 46/73 MS: 1 ChangeBit- 00:07:11.269 [2024-11-20 07:03:15.710096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.710128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.269 [2024-11-20 07:03:15.710178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1157725343925207040 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.710193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.269 #48 NEW cov: 12539 ft: 14761 corp: 20/846b lim: 105 exec/s: 48 rss: 74Mb L: 45/73 MS: 1 ChangeByte- 00:07:11.269 [2024-11-20 07:03:15.770575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.770608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.269 [2024-11-20 07:03:15.770660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.770673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.269 [2024-11-20 07:03:15.770725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.770738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.269 [2024-11-20 07:03:15.770790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:1229782938146640145 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.269 [2024-11-20 07:03:15.770804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.269 #49 NEW cov: 12539 ft: 15322 corp: 21/935b lim: 105 exec/s: 49 rss: 74Mb L: 89/89 MS: 1 CopyPart- 00:07:11.529 [2024-11-20 07:03:15.830443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:15.830471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.529 [2024-11-20 07:03:15.830508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:15.830523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.529 #50 NEW cov: 12539 ft: 15344 corp: 22/980b lim: 105 exec/s: 50 rss: 74Mb L: 45/89 MS: 1 ShuffleBytes- 00:07:11.529 [2024-11-20 07:03:15.870583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:15.870615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.529 [2024-11-20 07:03:15.870656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229831316758925585 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:15.870671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.529 #51 NEW cov: 12539 ft: 15358 corp: 23/1027b lim: 105 exec/s: 51 rss: 74Mb L: 47/89 MS: 1 InsertByte- 00:07:11.529 [2024-11-20 07:03:15.910760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:15.910786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:11.529 [2024-11-20 07:03:15.910823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1518013314399015185 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:15.910841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.529 #52 NEW cov: 12539 ft: 15395 corp: 24/1073b lim: 105 exec/s: 52 rss: 74Mb L: 46/89 MS: 1 ShuffleBytes- 00:07:11.529 [2024-11-20 07:03:15.950847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:15.950874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.529 [2024-11-20 07:03:15.950927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782937960976401 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:15.950942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.529 #53 NEW cov: 12539 ft: 15405 corp: 25/1127b lim: 105 exec/s: 53 rss: 74Mb L: 54/89 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:07:11.529 [2024-11-20 07:03:16.010892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:16.010919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.529 #54 NEW cov: 12539 ft: 15428 corp: 26/1158b lim: 105 exec/s: 54 rss: 74Mb L: 31/89 MS: 1 EraseBytes- 00:07:11.529 [2024-11-20 07:03:16.071148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:16.071176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.529 [2024-11-20 07:03:16.071213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.529 [2024-11-20 07:03:16.071228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.788 #55 NEW cov: 12539 ft: 15435 corp: 27/1202b lim: 105 exec/s: 55 rss: 74Mb L: 44/89 MS: 1 EraseBytes- 00:07:11.788 [2024-11-20 07:03:16.111302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.788 [2024-11-20 07:03:16.111329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.788 [2024-11-20 07:03:16.111366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.788 [2024-11-20 07:03:16.111380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.788 #56 NEW cov: 12539 ft: 15479 
corp: 28/1248b lim: 105 exec/s: 56 rss: 74Mb L: 46/89 MS: 1 ShuffleBytes- 00:07:11.788 [2024-11-20 07:03:16.151282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.788 [2024-11-20 07:03:16.151309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.788 #57 NEW cov: 12539 ft: 15520 corp: 29/1278b lim: 105 exec/s: 57 rss: 74Mb L: 30/89 MS: 1 EraseBytes- 00:07:11.788 [2024-11-20 07:03:16.191518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229783338182578449 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.788 [2024-11-20 07:03:16.191546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.788 [2024-11-20 07:03:16.191610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.788 [2024-11-20 07:03:16.191626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.788 #62 NEW cov: 12539 ft: 15524 corp: 30/1325b lim: 105 exec/s: 62 rss: 74Mb L: 47/89 MS: 5 ChangeBit-CopyPart-ChangeBit-ChangeBit-CrossOver- 00:07:11.788 [2024-11-20 07:03:16.231518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.788 [2024-11-20 07:03:16.231545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.788 #63 NEW cov: 12539 ft: 15579 corp: 31/1354b lim: 105 exec/s: 63 rss: 74Mb L: 29/89 MS: 1 InsertByte- 00:07:11.788 [2024-11-20 07:03:16.291797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.788 [2024-11-20 07:03:16.291824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.788 [2024-11-20 07:03:16.291861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1518013314399015185 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.788 [2024-11-20 07:03:16.291876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.789 #64 NEW cov: 12539 ft: 15581 corp: 32/1400b lim: 105 exec/s: 64 rss: 74Mb L: 46/89 MS: 1 ShuffleBytes- 00:07:11.789 [2024-11-20 07:03:16.331799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1224979098931106065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.789 [2024-11-20 07:03:16.331827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.048 #67 NEW cov: 12539 ft: 15602 corp: 33/1427b lim: 105 exec/s: 67 rss: 74Mb L: 27/89 MS: 3 EraseBytes-ChangeByte-PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:07:12.048 [2024-11-20 07:03:16.371936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:12.048 [2024-11-20 07:03:16.371963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.048 #68 NEW cov: 12539 ft: 15624 corp: 34/1457b lim: 105 exec/s: 68 rss: 74Mb L: 30/89 MS: 1 ChangeBinInt- 00:07:12.048 [2024-11-20 07:03:16.432065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.048 [2024-11-20 07:03:16.432092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.048 #69 NEW cov: 12539 ft: 15635 corp: 35/1482b lim: 105 exec/s: 69 rss: 74Mb L: 25/89 MS: 1 EraseBytes- 00:07:12.048 [2024-11-20 07:03:16.472267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229885192828686609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.048 [2024-11-20 07:03:16.472294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.048 [2024-11-20 07:03:16.472330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.048 [2024-11-20 07:03:16.472345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.048 #70 NEW cov: 12539 ft: 15650 corp: 36/1526b lim: 105 exec/s: 70 rss: 75Mb L: 44/89 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:07:12.048 [2024-11-20 07:03:16.532456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.048 [2024-11-20 07:03:16.532486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.048 [2024-11-20 07:03:16.532534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.048 [2024-11-20 07:03:16.532549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.048 #71 NEW cov: 12539 ft: 15680 corp: 37/1571b lim: 105 exec/s: 35 rss: 75Mb L: 45/89 MS: 1 CopyPart- 00:07:12.048 #71 DONE cov: 12539 ft: 15680 corp: 37/1571b lim: 105 exec/s: 35 rss: 75Mb 00:07:12.048 ###### Recommended dictionary. ###### 00:07:12.048 "\000\000\000\000\000\000\000\020" # Uses: 3 00:07:12.048 ###### End of recommended dictionary. 
###### 00:07:12.048 Done 71 runs in 2 second(s) 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:12.308 07:03:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:12.308 [2024-11-20 07:03:16.703269] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:12.308 [2024-11-20 07:03:16.703343] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782620 ] 00:07:12.568 [2024-11-20 07:03:16.891311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.568 [2024-11-20 07:03:16.924700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.568 [2024-11-20 07:03:16.983797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.568 [2024-11-20 07:03:17.000141] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:12.568 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.568 INFO: Seed: 3976048661 00:07:12.568 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:12.568 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:12.568 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:12.568 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.568 #2 INITED exec/s: 0 rss: 66Mb 00:07:12.568 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.568 This may also happen if the target rejected all inputs we tried so far 00:07:12.568 [2024-11-20 07:03:17.055491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653354678567361 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.568 [2024-11-20 07:03:17.055525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.568 [2024-11-20 07:03:17.055579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.568 [2024-11-20 07:03:17.055594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.826 NEW_FUNC[1/717]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:12.826 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.826 #14 NEW cov: 12333 ft: 12332 corp: 2/63b lim: 120 exec/s: 0 rss: 73Mb L: 62/62 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:12.826 [2024-11-20 07:03:17.376247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.826 [2024-11-20 07:03:17.376286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.086 #16 NEW cov: 12446 ft: 13643 corp: 3/98b lim: 120 exec/s: 0 rss: 73Mb L: 35/62 MS: 2 ChangeByte-CrossOver- 00:07:13.086 [2024-11-20 07:03:17.416622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.416653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.086 [2024-11-20 07:03:17.416695] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.416711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.086 [2024-11-20 07:03:17.416767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.416784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.086 #18 NEW cov: 12452 ft: 14318 corp: 4/183b lim: 120 exec/s: 0 rss: 73Mb L: 85/85 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:13.086 [2024-11-20 07:03:17.456376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.456404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.086 #20 NEW cov: 12537 ft: 14603 corp: 5/207b lim: 120 exec/s: 0 rss: 73Mb L: 24/85 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:13.086 [2024-11-20 07:03:17.496488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.496517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.086 #21 NEW cov: 12537 ft: 14716 corp: 6/234b lim: 120 exec/s: 0 rss: 73Mb L: 27/85 MS: 1 EraseBytes- 00:07:13.086 [2024-11-20 07:03:17.557148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.557177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.086 [2024-11-20 07:03:17.557223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.557240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.086 [2024-11-20 07:03:17.557295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.557311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.086 [2024-11-20 07:03:17.557364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3907518464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.557380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.086 #22 NEW cov: 12537 ft: 15181 corp: 7/334b lim: 120 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:07:13.086 [2024-11-20 07:03:17.616856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13907115649500561857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.086 [2024-11-20 07:03:17.616885] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.345 #24 NEW cov: 12537 ft: 15245 corp: 8/375b lim: 120 exec/s: 0 rss: 74Mb L: 41/100 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:13.345 [2024-11-20 07:03:17.677319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.677347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.345 [2024-11-20 07:03:17.677398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.677414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.345 [2024-11-20 07:03:17.677470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.677502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.345 #25 NEW cov: 12537 ft: 15351 corp: 9/459b lim: 120 exec/s: 0 rss: 74Mb L: 84/100 MS: 1 EraseBytes- 00:07:13.345 [2024-11-20 07:03:17.717131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328959 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.717158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.345 #26 NEW cov: 12537 ft: 15454 corp: 10/483b lim: 120 exec/s: 0 rss: 74Mb L: 24/100 MS: 1 ChangeByte- 00:07:13.345 [2024-11-20 07:03:17.777767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.777794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.345 [2024-11-20 07:03:17.777842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.777873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.345 [2024-11-20 07:03:17.777932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.777947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.345 [2024-11-20 07:03:17.778003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3250651136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.778018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.345 #27 NEW cov: 12537 ft: 15509 corp: 11/594b lim: 120 exec/s: 0 rss: 74Mb L: 111/111 MS: 1 CrossOver- 00:07:13.345 [2024-11-20 07:03:17.837450] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.837478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.345 #28 NEW cov: 12537 ft: 15605 corp: 12/618b lim: 120 exec/s: 0 rss: 74Mb L: 24/111 MS: 1 ChangeByte- 00:07:13.345 [2024-11-20 07:03:17.878007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.878035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.345 [2024-11-20 07:03:17.878107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.878124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.345 [2024-11-20 07:03:17.878183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.878197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.345 [2024-11-20 07:03:17.878254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3250651136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.345 [2024-11-20 07:03:17.878268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.605 #34 NEW cov: 12537 ft: 15635 corp: 13/729b lim: 120 exec/s: 0 rss: 74Mb L: 111/111 MS: 1 ChangeBit- 00:07:13.605 [2024-11-20 07:03:17.937869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3027784439955456 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:17.937896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.605 [2024-11-20 07:03:17.937949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3250651136 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:17.937966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.605 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:13.605 #35 NEW cov: 12560 ft: 15730 corp: 14/790b lim: 120 exec/s: 0 rss: 74Mb L: 61/111 MS: 1 CrossOver- 00:07:13.605 [2024-11-20 07:03:17.977824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:17.977852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.605 #36 NEW cov: 12560 ft: 15741 corp: 15/814b lim: 120 exec/s: 0 rss: 74Mb L: 24/111 MS: 1 ShuffleBytes- 00:07:13.605 [2024-11-20 07:03:18.018406] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.018433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.605 [2024-11-20 07:03:18.018490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18410156126738956799 len:43458 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.018506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.605 [2024-11-20 07:03:18.018561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.018577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.605 [2024-11-20 07:03:18.018632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13961653357748797889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.018648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.605 #37 NEW cov: 12560 ft: 15760 corp: 16/933b lim: 120 exec/s: 37 rss: 74Mb L: 119/119 MS: 1 CMP- DE: "\377\377~\003t_\244\251"- 00:07:13.605 [2024-11-20 07:03:18.058550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.058577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.605 [2024-11-20 07:03:18.058630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.058646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.605 [2024-11-20 07:03:18.058700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.058714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.605 [2024-11-20 07:03:18.058768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.058784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.605 #38 NEW cov: 12560 ft: 15780 corp: 17/1038b lim: 120 exec/s: 38 rss: 74Mb L: 105/119 MS: 1 InsertRepeatedBytes- 00:07:13.605 [2024-11-20 07:03:18.098153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328959 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.098181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.605 #39 NEW cov: 12560 ft: 15793 corp: 18/1062b lim: 120 exec/s: 39 rss: 74Mb L: 24/119 MS: 1 ChangeBinInt- 00:07:13.605 
[2024-11-20 07:03:18.158531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3027784439955456 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.158560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.605 [2024-11-20 07:03:18.158604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3250651136 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.605 [2024-11-20 07:03:18.158621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.864 #40 NEW cov: 12560 ft: 15858 corp: 19/1123b lim: 120 exec/s: 40 rss: 74Mb L: 61/119 MS: 1 ChangeBit- 00:07:13.864 [2024-11-20 07:03:18.218858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.864 [2024-11-20 07:03:18.218886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.864 [2024-11-20 07:03:18.218933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.864 [2024-11-20 07:03:18.218949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.864 [2024-11-20 07:03:18.219004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.864 [2024-11-20 07:03:18.219035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.864 #41 NEW cov: 12560 ft: 15878 corp: 20/1207b lim: 120 exec/s: 41 rss: 74Mb L: 84/119 MS: 1 ChangeBit- 00:07:13.864 [2024-11-20 07:03:18.258637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.864 [2024-11-20 07:03:18.258665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.864 #42 NEW cov: 12560 ft: 15896 corp: 21/1242b lim: 120 exec/s: 42 rss: 74Mb L: 35/119 MS: 1 CrossOver- 00:07:13.864 [2024-11-20 07:03:18.298794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13907115649500561857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.864 [2024-11-20 07:03:18.298822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.864 #43 NEW cov: 12560 ft: 15933 corp: 22/1283b lim: 120 exec/s: 43 rss: 74Mb L: 41/119 MS: 1 ChangeBit- 00:07:13.864 [2024-11-20 07:03:18.359130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653354678567361 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.864 [2024-11-20 07:03:18.359158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.864 [2024-11-20 07:03:18.359197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:13.864 [2024-11-20 07:03:18.359213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.864 #44 NEW cov: 12560 ft: 15946 corp: 23/1345b lim: 120 exec/s: 44 rss: 74Mb L: 62/119 MS: 1 CopyPart- 00:07:13.864 [2024-11-20 07:03:18.399037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.864 [2024-11-20 07:03:18.399065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.123 #45 NEW cov: 12560 ft: 15956 corp: 24/1377b lim: 120 exec/s: 45 rss: 74Mb L: 32/119 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:14.123 [2024-11-20 07:03:18.439641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.439668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.123 [2024-11-20 07:03:18.439727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18410156126738956799 len:43458 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.439741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.123 [2024-11-20 07:03:18.439797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.439816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.123 [2024-11-20 07:03:18.439873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.439889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.123 #46 NEW cov: 12560 ft: 15975 corp: 25/1496b lim: 120 exec/s: 46 rss: 74Mb L: 119/119 MS: 1 CrossOver- 00:07:14.123 [2024-11-20 07:03:18.499856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653354678567361 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.499885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.123 [2024-11-20 07:03:18.499935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.499951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.123 [2024-11-20 07:03:18.500008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.500041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.123 [2024-11-20 07:03:18.500099] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.500114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.123 #47 NEW cov: 12560 ft: 15980 corp: 26/1605b lim: 120 exec/s: 47 rss: 74Mb L: 109/119 MS: 1 InsertRepeatedBytes- 00:07:14.123 [2024-11-20 07:03:18.539463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.539491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.123 #48 NEW cov: 12560 ft: 15991 corp: 27/1629b lim: 120 exec/s: 48 rss: 75Mb L: 24/119 MS: 1 CrossOver- 00:07:14.123 [2024-11-20 07:03:18.599633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.599661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.123 #49 NEW cov: 12560 ft: 16017 corp: 28/1661b lim: 120 exec/s: 49 rss: 75Mb L: 32/119 MS: 1 ChangeBinInt- 00:07:14.123 [2024-11-20 07:03:18.659940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3027784439955456 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.659969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.123 [2024-11-20 07:03:18.660014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:212208994811904 len:194 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.123 [2024-11-20 07:03:18.660030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.382 #50 NEW cov: 12560 ft: 16057 corp: 29/1722b lim: 120 exec/s: 50 rss: 75Mb L: 61/119 MS: 1 CopyPart- 00:07:14.382 [2024-11-20 07:03:18.699884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13961653357748797889 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.699913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.382 #51 NEW cov: 12560 ft: 16064 corp: 30/1757b lim: 120 exec/s: 51 rss: 75Mb L: 35/119 MS: 1 ChangeBit- 00:07:14.382 [2024-11-20 07:03:18.740015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.740043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.382 #57 NEW cov: 12560 ft: 16081 corp: 31/1790b lim: 120 exec/s: 57 rss: 75Mb L: 33/119 MS: 1 InsertByte- 00:07:14.382 [2024-11-20 07:03:18.800847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3027784439955456 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.800876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.382 [2024-11-20 07:03:18.800935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:212208994811904 len:194 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.800951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.382 [2024-11-20 07:03:18.801022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.801039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.382 [2024-11-20 07:03:18.801096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.801112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.382 [2024-11-20 07:03:18.801170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:213034672848896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.801185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:14.382 #58 NEW cov: 12560 ft: 16138 corp: 32/1910b lim: 120 exec/s: 58 rss: 75Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:07:14.382 [2024-11-20 07:03:18.860528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3027784439955456 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.860556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.382 [2024-11-20 07:03:18.860593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3250651136 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.860614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.382 #59 NEW cov: 12560 ft: 16168 corp: 33/1971b lim: 120 exec/s: 59 rss: 75Mb L: 61/120 MS: 1 CMP- DE: "\001\214\330\010uLF4"- 00:07:14.382 [2024-11-20 07:03:18.900620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3027784439955456 len:49410 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.900648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.382 [2024-11-20 07:03:18.900690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3250651136 len:49602 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.382 [2024-11-20 07:03:18.900706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.641 #60 NEW cov: 12560 ft: 16174 corp: 34/2032b lim: 120 exec/s: 60 rss: 75Mb L: 61/120 MS: 1 CMP- DE: "\001\000\000\000\000\000\000H"- 00:07:14.641 [2024-11-20 07:03:18.960614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:62597 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:14.641 [2024-11-20 07:03:18.960642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.641 #61 NEW cov: 12560 ft: 16179 corp: 35/2064b lim: 120 exec/s: 61 rss: 75Mb L: 32/120 MS: 1 CMP- DE: "%\203\364\204\010\330\214\000"- 00:07:14.641 [2024-11-20 07:03:19.020783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158216328838 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.641 [2024-11-20 07:03:19.020811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.641 #63 NEW cov: 12560 ft: 16215 corp: 36/2088b lim: 120 exec/s: 31 rss: 75Mb L: 24/120 MS: 2 EraseBytes-InsertByte- 00:07:14.641 #63 DONE cov: 12560 ft: 16215 corp: 36/2088b lim: 120 exec/s: 31 rss: 75Mb 00:07:14.641 ###### Recommended dictionary. ###### 00:07:14.641 "\377\377~\003t_\244\251" # Uses: 0 00:07:14.641 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:14.641 "\001\214\330\010uLF4" # Uses: 0 00:07:14.641 "\001\000\000\000\000\000\000H" # Uses: 0 00:07:14.641 "%\203\364\204\010\330\214\000" # Uses: 0 00:07:14.641 ###### End of recommended dictionary. ###### 00:07:14.641 Done 63 runs in 2 second(s) 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:14.641 07:03:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:14.900 [2024-11-20 07:03:19.196624] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:14.900 [2024-11-20 07:03:19.196694] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783149 ] 00:07:14.900 [2024-11-20 07:03:19.381659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.900 [2024-11-20 07:03:19.414472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.158 [2024-11-20 07:03:19.473921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.158 [2024-11-20 07:03:19.490264] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:15.158 INFO: Running with entropic power schedule (0xFF, 100). 00:07:15.158 INFO: Seed: 2170066414 00:07:15.158 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:15.158 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:15.158 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:15.158 INFO: A corpus is not provided, starting from an empty corpus 00:07:15.158 #2 INITED exec/s: 0 rss: 65Mb 00:07:15.158 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:15.158 This may also happen if the target rejected all inputs we tried so far 00:07:15.158 [2024-11-20 07:03:19.535790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.158 [2024-11-20 07:03:19.535820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.158 [2024-11-20 07:03:19.535855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.158 [2024-11-20 07:03:19.535869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.158 [2024-11-20 07:03:19.535923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.158 [2024-11-20 07:03:19.535938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.158 [2024-11-20 07:03:19.535989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.158 [2024-11-20 07:03:19.536003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.417 NEW_FUNC[1/715]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:15.417 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.417 #26 NEW cov: 12276 ft: 12272 corp: 2/82b lim: 100 exec/s: 0 rss: 73Mb L: 81/81 MS: 4 ShuffleBytes-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:15.417 [2024-11-20 07:03:19.856740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.417 [2024-11-20 07:03:19.856772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.417 [2024-11-20 07:03:19.856822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.417 [2024-11-20 07:03:19.856838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.417 [2024-11-20 07:03:19.856894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.417 [2024-11-20 07:03:19.856908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.417 [2024-11-20 07:03:19.856964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.417 [2024-11-20 07:03:19.856979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.417 #27 NEW cov: 12389 ft: 12853 corp: 3/163b lim: 100 exec/s: 0 rss: 73Mb L: 81/81 MS: 1 ChangeByte- 00:07:15.417 [2024-11-20 07:03:19.916850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.417 [2024-11-20 07:03:19.916878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.417 [2024-11-20 07:03:19.916918] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.417 [2024-11-20 07:03:19.916933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.417 [2024-11-20 07:03:19.916990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.417 [2024-11-20 07:03:19.917006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.417 [2024-11-20 07:03:19.917061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.417 [2024-11-20 07:03:19.917076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.417 #28 NEW cov: 12395 ft: 13179 corp: 4/244b lim: 100 exec/s: 0 rss: 73Mb L: 81/81 MS: 1 ChangeByte- 00:07:15.677 [2024-11-20 07:03:19.976986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.677 [2024-11-20 07:03:19.977017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:19.977066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.677 [2024-11-20 07:03:19.977081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:19.977135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.677 [2024-11-20 07:03:19.977151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:19.977219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.677 [2024-11-20 07:03:19.977232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.677 #29 NEW cov: 12480 ft: 13560 corp: 5/333b lim: 100 exec/s: 0 rss: 73Mb L: 89/89 MS: 1 CopyPart- 00:07:15.677 [2024-11-20 07:03:20.037095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.677 [2024-11-20 07:03:20.037122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:20.037172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.677 [2024-11-20 07:03:20.037187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:20.037247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.677 [2024-11-20 07:03:20.037261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:20.037321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.677 [2024-11-20 07:03:20.037336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.677 #30 NEW cov: 12480 ft: 13742 corp: 6/415b lim: 100 exec/s: 0 rss: 73Mb L: 82/89 MS: 1 InsertByte- 00:07:15.677 [2024-11-20 07:03:20.076996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.677 [2024-11-20 07:03:20.077025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:20.077064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.677 [2024-11-20 07:03:20.077079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.677 #31 NEW cov: 12480 ft: 14229 corp: 7/467b lim: 100 exec/s: 0 rss: 73Mb L: 52/89 MS: 1 EraseBytes- 00:07:15.677 [2024-11-20 07:03:20.137195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.677 [2024-11-20 07:03:20.137224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:20.137277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.677 [2024-11-20 07:03:20.137292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.677 #32 NEW cov: 12480 ft: 14312 corp: 8/514b lim: 100 exec/s: 0 rss: 73Mb L: 47/89 MS: 1 EraseBytes- 00:07:15.677 [2024-11-20 07:03:20.177529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.677 [2024-11-20 07:03:20.177556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:20.177624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.677 [2024-11-20 07:03:20.177640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:20.177695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.677 [2024-11-20 07:03:20.177710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.677 [2024-11-20 07:03:20.177766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.677 [2024-11-20 07:03:20.177781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.677 #33 NEW cov: 12480 ft: 14353 corp: 9/604b lim: 100 exec/s: 0 rss: 73Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:15.936 [2024-11-20 07:03:20.237758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.936 [2024-11-20 07:03:20.237785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.237857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.936 [2024-11-20 07:03:20.237872] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.237928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.936 [2024-11-20 07:03:20.237943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.237999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.936 [2024-11-20 07:03:20.238012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.936 #34 NEW cov: 12480 ft: 14410 corp: 10/694b lim: 100 exec/s: 0 rss: 73Mb L: 90/90 MS: 1 CrossOver- 00:07:15.936 [2024-11-20 07:03:20.297901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.936 [2024-11-20 07:03:20.297929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.297977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.936 [2024-11-20 07:03:20.297991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.298046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.936 [2024-11-20 07:03:20.298061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.298121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:15.936 [2024-11-20 07:03:20.298135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.936 #35 NEW cov: 12480 ft: 14464 corp: 11/776b lim: 100 exec/s: 0 rss: 74Mb L: 82/90 MS: 1 ChangeByte- 00:07:15.936 [2024-11-20 07:03:20.357810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.936 [2024-11-20 07:03:20.357836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.357890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.936 [2024-11-20 07:03:20.357905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.936 #36 NEW cov: 12480 ft: 14481 corp: 12/824b lim: 100 exec/s: 0 rss: 74Mb L: 48/90 MS: 1 EraseBytes- 00:07:15.936 [2024-11-20 07:03:20.398033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.936 [2024-11-20 07:03:20.398061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.398114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.936 [2024-11-20 07:03:20.398129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.398188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.936 [2024-11-20 07:03:20.398202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.936 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:15.936 #37 NEW cov: 12503 ft: 14712 corp: 13/889b lim: 100 exec/s: 0 rss: 74Mb L: 65/90 MS: 1 EraseBytes- 00:07:15.936 [2024-11-20 07:03:20.438160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.936 [2024-11-20 07:03:20.438188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.438252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.936 [2024-11-20 07:03:20.438268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.438326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.936 [2024-11-20 07:03:20.438341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.936 #38 NEW cov: 12503 ft: 14750 corp: 14/949b lim: 100 exec/s: 0 rss: 74Mb L: 60/90 MS: 1 EraseBytes- 00:07:15.936 [2024-11-20 07:03:20.478289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:15.936 [2024-11-20 07:03:20.478317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.478353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:15.936 [2024-11-20 07:03:20.478368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.936 [2024-11-20 07:03:20.478425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:15.936 [2024-11-20 07:03:20.478441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.195 #39 NEW cov: 12503 ft: 14765 corp: 15/1015b lim: 100 exec/s: 0 rss: 74Mb L: 66/90 MS: 1 InsertByte- 00:07:16.195 [2024-11-20 07:03:20.538585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.195 [2024-11-20 07:03:20.538618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.538681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.195 [2024-11-20 07:03:20.538696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.538751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.195 [2024-11-20 07:03:20.538767] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.538822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.195 [2024-11-20 07:03:20.538837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.195 #40 NEW cov: 12503 ft: 14804 corp: 16/1105b lim: 100 exec/s: 40 rss: 74Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:16.195 [2024-11-20 07:03:20.578735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.195 [2024-11-20 07:03:20.578762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.578828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.195 [2024-11-20 07:03:20.578843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.578900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.195 [2024-11-20 07:03:20.578914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.578973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.195 [2024-11-20 07:03:20.578988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.195 #41 NEW cov: 12503 ft: 14841 corp: 17/1194b lim: 100 exec/s: 41 rss: 74Mb L: 89/90 MS: 1 ChangeBit- 00:07:16.195 [2024-11-20 07:03:20.618823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.195 [2024-11-20 07:03:20.618850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.618905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.195 [2024-11-20 07:03:20.618920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.618978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.195 [2024-11-20 07:03:20.618992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.619051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.195 [2024-11-20 07:03:20.619066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.195 #42 NEW cov: 12503 ft: 14894 corp: 18/1284b lim: 100 exec/s: 42 rss: 74Mb L: 90/90 MS: 1 CrossOver- 00:07:16.195 [2024-11-20 07:03:20.658918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.195 [2024-11-20 07:03:20.658945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.659013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.195 [2024-11-20 07:03:20.659035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.659091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.195 [2024-11-20 07:03:20.659106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.659160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.195 [2024-11-20 07:03:20.659175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.195 #43 NEW cov: 12503 ft: 14901 corp: 19/1374b lim: 100 exec/s: 43 rss: 74Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:16.195 [2024-11-20 07:03:20.719125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.195 [2024-11-20 07:03:20.719153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.719203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.195 [2024-11-20 07:03:20.719218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.195 [2024-11-20 07:03:20.719276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.195 [2024-11-20 07:03:20.719292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.196 [2024-11-20 07:03:20.719350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.196 [2024-11-20 07:03:20.719364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.196 #44 NEW cov: 12503 ft: 14943 corp: 20/1455b lim: 100 exec/s: 44 rss: 74Mb L: 81/90 MS: 1 CrossOver- 00:07:16.454 [2024-11-20 07:03:20.758949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.455 [2024-11-20 07:03:20.758975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.759022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.455 [2024-11-20 07:03:20.759038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.455 #45 NEW cov: 12503 ft: 14959 corp: 21/1507b lim: 100 exec/s: 45 rss: 74Mb L: 52/90 MS: 1 ChangeBit- 00:07:16.455 [2024-11-20 07:03:20.799064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.455 [2024-11-20 07:03:20.799092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.455 
[2024-11-20 07:03:20.799136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.455 [2024-11-20 07:03:20.799151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.455 #46 NEW cov: 12503 ft: 14978 corp: 22/1561b lim: 100 exec/s: 46 rss: 74Mb L: 54/90 MS: 1 CopyPart- 00:07:16.455 [2024-11-20 07:03:20.859544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.455 [2024-11-20 07:03:20.859572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.859623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.455 [2024-11-20 07:03:20.859638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.859695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.455 [2024-11-20 07:03:20.859710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.859765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.455 [2024-11-20 07:03:20.859779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.455 #47 NEW cov: 12503 ft: 15006 corp: 23/1642b lim: 100 exec/s: 47 rss: 74Mb L: 81/90 MS: 1 ShuffleBytes- 00:07:16.455 [2024-11-20 07:03:20.899507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.455 [2024-11-20 07:03:20.899533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.899585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.455 [2024-11-20 07:03:20.899604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.899659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.455 [2024-11-20 07:03:20.899674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.455 #49 NEW cov: 12503 ft: 15038 corp: 24/1721b lim: 100 exec/s: 49 rss: 74Mb L: 79/90 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:16.455 [2024-11-20 07:03:20.939736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.455 [2024-11-20 07:03:20.939763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.939832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.455 [2024-11-20 07:03:20.939849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.939906] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.455 [2024-11-20 07:03:20.939921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:20.939979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.455 [2024-11-20 07:03:20.939992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.455 #50 NEW cov: 12503 ft: 15050 corp: 25/1811b lim: 100 exec/s: 50 rss: 74Mb L: 90/90 MS: 1 ShuffleBytes- 00:07:16.455 [2024-11-20 07:03:20.999921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.455 [2024-11-20 07:03:20.999949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:21.000000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.455 [2024-11-20 07:03:21.000015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:21.000072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.455 [2024-11-20 07:03:21.000087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.455 [2024-11-20 07:03:21.000142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.455 [2024-11-20 07:03:21.000157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.714 #51 NEW cov: 12503 ft: 15091 corp: 26/1901b lim: 100 exec/s: 51 rss: 74Mb L: 90/90 MS: 1 ShuffleBytes- 00:07:16.714 [2024-11-20 07:03:21.040019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.714 [2024-11-20 07:03:21.040046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.040101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.714 [2024-11-20 07:03:21.040113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.040170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.714 [2024-11-20 07:03:21.040185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.040239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.714 [2024-11-20 07:03:21.040254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.714 #52 NEW cov: 12503 ft: 15100 corp: 27/1990b lim: 100 exec/s: 52 rss: 74Mb L: 89/90 MS: 1 InsertRepeatedBytes- 00:07:16.714 [2024-11-20 07:03:21.080094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:0 nsid:0 00:07:16.714 [2024-11-20 07:03:21.080120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.080170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.714 [2024-11-20 07:03:21.080185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.080242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.714 [2024-11-20 07:03:21.080257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.080314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.714 [2024-11-20 07:03:21.080328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.714 #53 NEW cov: 12503 ft: 15120 corp: 28/2079b lim: 100 exec/s: 53 rss: 74Mb L: 89/90 MS: 1 ChangeBit- 00:07:16.714 [2024-11-20 07:03:21.140266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.714 [2024-11-20 07:03:21.140292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.140345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.714 [2024-11-20 07:03:21.140360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.140416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.714 [2024-11-20 07:03:21.140431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.140489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.714 [2024-11-20 07:03:21.140504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.714 #54 NEW cov: 12503 ft: 15168 corp: 29/2170b lim: 100 exec/s: 54 rss: 74Mb L: 91/91 MS: 1 CopyPart- 00:07:16.714 [2024-11-20 07:03:21.200326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.714 [2024-11-20 07:03:21.200354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.200394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.714 [2024-11-20 07:03:21.200410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.200468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.714 [2024-11-20 07:03:21.200499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:07:16.714 #55 NEW cov: 12503 ft: 15213 corp: 30/2246b lim: 100 exec/s: 55 rss: 74Mb L: 76/91 MS: 1 CrossOver- 00:07:16.714 [2024-11-20 07:03:21.240574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.714 [2024-11-20 07:03:21.240607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.240655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.714 [2024-11-20 07:03:21.240669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.240725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.714 [2024-11-20 07:03:21.240739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.714 [2024-11-20 07:03:21.240796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.714 [2024-11-20 07:03:21.240810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.714 #56 NEW cov: 12503 ft: 15217 corp: 31/2335b lim: 100 exec/s: 56 rss: 74Mb L: 89/91 MS: 1 ChangeByte- 00:07:16.973 [2024-11-20 07:03:21.280709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.973 [2024-11-20 07:03:21.280736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.973 [2024-11-20 07:03:21.280800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.973 [2024-11-20 07:03:21.280816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.973 [2024-11-20 07:03:21.280873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.973 [2024-11-20 07:03:21.280888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.973 #58 NEW cov: 12503 ft: 15233 corp: 32/2414b lim: 100 exec/s: 58 rss: 74Mb L: 79/91 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:16.973 [2024-11-20 07:03:21.320800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.973 [2024-11-20 07:03:21.320826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.973 [2024-11-20 07:03:21.320879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.973 [2024-11-20 07:03:21.320894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.973 [2024-11-20 07:03:21.320948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.973 [2024-11-20 07:03:21.320963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.973 [2024-11-20 
07:03:21.321018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.973 [2024-11-20 07:03:21.321031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.973 #59 NEW cov: 12503 ft: 15240 corp: 33/2504b lim: 100 exec/s: 59 rss: 74Mb L: 90/91 MS: 1 ChangeBit- 00:07:16.974 [2024-11-20 07:03:21.360883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.974 [2024-11-20 07:03:21.360909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.974 [2024-11-20 07:03:21.360965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.974 [2024-11-20 07:03:21.360980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.974 [2024-11-20 07:03:21.361035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.974 [2024-11-20 07:03:21.361050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.974 [2024-11-20 07:03:21.361109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.974 [2024-11-20 07:03:21.361122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.974 #60 NEW cov: 12503 ft: 15241 corp: 34/2591b lim: 100 exec/s: 60 rss: 74Mb L: 87/91 MS: 1 EraseBytes- 00:07:16.974 [2024-11-20 07:03:21.400775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.974 [2024-11-20 07:03:21.400802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.974 [2024-11-20 07:03:21.400841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.974 [2024-11-20 07:03:21.400857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.974 #61 NEW cov: 12503 ft: 15260 corp: 35/2643b lim: 100 exec/s: 61 rss: 74Mb L: 52/91 MS: 1 ShuffleBytes- 00:07:16.974 [2024-11-20 07:03:21.441111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.974 [2024-11-20 07:03:21.441138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.974 [2024-11-20 07:03:21.441195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.974 [2024-11-20 07:03:21.441209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.974 [2024-11-20 07:03:21.441265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:16.974 [2024-11-20 07:03:21.441280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.974 [2024-11-20 07:03:21.441335] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:16.974 [2024-11-20 07:03:21.441350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.974 #62 NEW cov: 12503 ft: 15272 corp: 36/2734b lim: 100 exec/s: 62 rss: 75Mb L: 91/91 MS: 1 InsertByte- 00:07:16.974 [2024-11-20 07:03:21.501042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:16.974 [2024-11-20 07:03:21.501071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.974 [2024-11-20 07:03:21.501113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:16.974 [2024-11-20 07:03:21.501127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.974 #63 NEW cov: 12503 ft: 15277 corp: 37/2778b lim: 100 exec/s: 63 rss: 75Mb L: 44/91 MS: 1 EraseBytes- 00:07:17.233 [2024-11-20 07:03:21.541366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:17.233 [2024-11-20 07:03:21.541396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.233 [2024-11-20 07:03:21.541442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:17.233 [2024-11-20 07:03:21.541456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.233 [2024-11-20 07:03:21.541512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:17.233 [2024-11-20 07:03:21.541528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.233 [2024-11-20 07:03:21.541585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:17.233 [2024-11-20 07:03:21.541604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.233 #64 pulse cov: 12503 ft: 15281 corp: 37/2778b lim: 100 exec/s: 32 rss: 75Mb 00:07:17.233 #64 NEW cov: 12503 ft: 15281 corp: 38/2868b lim: 100 exec/s: 32 rss: 75Mb L: 90/91 MS: 1 ChangeBit- 00:07:17.233 #64 DONE cov: 12503 ft: 15281 corp: 38/2868b lim: 100 exec/s: 32 rss: 75Mb 00:07:17.233 Done 64 runs in 2 second(s) 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:17.233 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # 
local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:17.234 07:03:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:17.234 [2024-11-20 07:03:21.714344] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:17.234 [2024-11-20 07:03:21.714424] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783456 ] 00:07:17.492 [2024-11-20 07:03:21.900157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.492 [2024-11-20 07:03:21.933585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.492 [2024-11-20 07:03:21.992784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.492 [2024-11-20 07:03:22.009112] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:17.492 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.492 INFO: Seed: 393107643 00:07:17.751 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:17.751 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:17.751 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:17.751 INFO: A corpus is not provided, starting from an empty corpus 00:07:17.751 #2 INITED exec/s: 0 rss: 65Mb 00:07:17.751 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:17.751 This may also happen if the target rejected all inputs we tried so far 00:07:17.751 [2024-11-20 07:03:22.079214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:17.751 [2024-11-20 07:03:22.079250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.751 [2024-11-20 07:03:22.079367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:17.751 [2024-11-20 07:03:22.079389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.751 [2024-11-20 07:03:22.079504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:17.751 [2024-11-20 07:03:22.079530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.751 [2024-11-20 07:03:22.079656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:17.751 [2024-11-20 07:03:22.079681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.011 NEW_FUNC[1/715]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:18.011 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:18.011 #18 NEW cov: 12254 ft: 12255 corp: 2/44b lim: 50 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:07:18.011 [2024-11-20 07:03:22.410037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:18.011 [2024-11-20 07:03:22.410077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.410202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.011 [2024-11-20 07:03:22.410225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.410343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.011 [2024-11-20 07:03:22.410367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.410485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:8 00:07:18.011 [2024-11-20 07:03:22.410512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.011 #24 NEW cov: 12367 ft: 12911 corp: 3/87b lim: 50 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 ChangeBinInt- 00:07:18.011 [2024-11-20 07:03:22.470156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:18.011 [2024-11-20 07:03:22.470191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.470270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.011 [2024-11-20 07:03:22.470292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.470397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.011 [2024-11-20 07:03:22.470418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.470533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:8 00:07:18.011 [2024-11-20 07:03:22.470557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.011 #25 NEW cov: 12373 ft: 13126 corp: 4/130b lim: 50 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 ShuffleBytes- 00:07:18.011 [2024-11-20 07:03:22.540302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:61361545090695168 len:1 00:07:18.011 [2024-11-20 07:03:22.540337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.540403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.011 [2024-11-20 07:03:22.540427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.540547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.011 [2024-11-20 07:03:22.540571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.011 [2024-11-20 07:03:22.540691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:8 00:07:18.011 [2024-11-20 07:03:22.540713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.270 #26 NEW cov: 12458 ft: 13357 corp: 5/173b lim: 50 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 ChangeByte- 00:07:18.270 [2024-11-20 07:03:22.600437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:18.271 [2024-11-20 07:03:22.600469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.600563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.600585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.600700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.600720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.600836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.600860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.271 #27 NEW cov: 12458 ft: 13438 corp: 6/222b lim: 50 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 CrossOver- 00:07:18.271 [2024-11-20 07:03:22.640590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772286 len:1 00:07:18.271 [2024-11-20 07:03:22.640627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.640717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.640741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.640851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.640870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.640981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.641002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.271 #28 NEW cov: 12458 ft: 13558 corp: 7/266b lim: 50 exec/s: 0 rss: 73Mb L: 44/49 MS: 1 InsertByte- 00:07:18.271 [2024-11-20 07:03:22.680233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7957419010369285742 len:28271 00:07:18.271 [2024-11-20 07:03:22.680260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.271 #33 NEW cov: 12458 ft: 13941 corp: 8/283b lim: 50 exec/s: 0 rss: 73Mb L: 17/49 MS: 5 InsertByte-EraseBytes-ChangeByte-CrossOver-InsertRepeatedBytes- 00:07:18.271 [2024-11-20 07:03:22.730860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:18.271 [2024-11-20 07:03:22.730890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.730982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.731006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.731109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.731132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.731245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 
nsid:0 lba:2348810240 len:8 00:07:18.271 [2024-11-20 07:03:22.731267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.271 #34 NEW cov: 12458 ft: 14066 corp: 9/326b lim: 50 exec/s: 0 rss: 73Mb L: 43/49 MS: 1 ChangeByte- 00:07:18.271 [2024-11-20 07:03:22.771203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:18.271 [2024-11-20 07:03:22.771233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.771311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.771332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.771442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.271 [2024-11-20 07:03:22.771461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.771578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2348810240 len:8 00:07:18.271 [2024-11-20 07:03:22.771603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.271 [2024-11-20 07:03:22.771725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744069414649855 len:65281 00:07:18.271 [2024-11-20 07:03:22.771747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:18.271 #35 NEW cov: 12458 ft: 14126 corp: 10/376b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:18.530 [2024-11-20 07:03:22.840742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7957309059206508142 len:28271 00:07:18.530 [2024-11-20 07:03:22.840768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.530 #36 NEW cov: 12458 ft: 14285 corp: 11/393b lim: 50 exec/s: 0 rss: 73Mb L: 17/50 MS: 1 CrossOver- 00:07:18.530 [2024-11-20 07:03:22.910932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7957418581006876782 len:28271 00:07:18.530 [2024-11-20 07:03:22.910958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.530 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:18.530 #37 NEW cov: 12481 ft: 14341 corp: 12/411b lim: 50 exec/s: 0 rss: 74Mb L: 18/50 MS: 1 CrossOver- 00:07:18.530 [2024-11-20 07:03:22.981546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:18.530 [2024-11-20 07:03:22.981576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.530 [2024-11-20 07:03:22.981680] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.530 [2024-11-20 07:03:22.981711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.530 [2024-11-20 07:03:22.981823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.530 [2024-11-20 07:03:22.981847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.530 [2024-11-20 07:03:22.981964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:18.531 [2024-11-20 07:03:22.981984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.531 #38 NEW cov: 12481 ft: 14363 corp: 13/454b lim: 50 exec/s: 0 rss: 74Mb L: 43/50 MS: 1 ShuffleBytes- 00:07:18.531 [2024-11-20 07:03:23.021928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:18.531 [2024-11-20 07:03:23.021959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.531 [2024-11-20 07:03:23.022026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.531 [2024-11-20 07:03:23.022046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.531 [2024-11-20 07:03:23.022158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4 len:1 00:07:18.531 [2024-11-20 07:03:23.022181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.531 [2024-11-20 07:03:23.022289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2348810240 len:8 00:07:18.531 [2024-11-20 07:03:23.022312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.531 [2024-11-20 07:03:23.022422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744069414649855 len:65281 00:07:18.531 [2024-11-20 07:03:23.022445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:18.531 #39 NEW cov: 12481 ft: 14374 corp: 14/504b lim: 50 exec/s: 39 rss: 74Mb L: 50/50 MS: 1 ChangeBit- 00:07:18.531 [2024-11-20 07:03:23.081913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:18.531 [2024-11-20 07:03:23.081946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.531 [2024-11-20 07:03:23.082044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.531 [2024-11-20 07:03:23.082062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.531 [2024-11-20 07:03:23.082174] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.531 [2024-11-20 07:03:23.082196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.531 [2024-11-20 07:03:23.082309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:18.531 [2024-11-20 07:03:23.082329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.789 #40 NEW cov: 12481 ft: 14435 corp: 15/553b lim: 50 exec/s: 40 rss: 74Mb L: 49/50 MS: 1 ChangeBinInt- 00:07:18.789 [2024-11-20 07:03:23.151608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5931894172722287186 len:21075 00:07:18.789 [2024-11-20 07:03:23.151641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.789 #41 NEW cov: 12481 ft: 14452 corp: 16/571b lim: 50 exec/s: 41 rss: 74Mb L: 18/50 MS: 1 InsertRepeatedBytes- 00:07:18.789 [2024-11-20 07:03:23.202083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7957297591643827822 len:1 00:07:18.789 [2024-11-20 07:03:23.202120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.202219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.789 [2024-11-20 07:03:23.202242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.202373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7957297593462976110 len:1793 00:07:18.789 [2024-11-20 07:03:23.202397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.789 #42 NEW cov: 12481 ft: 14699 corp: 17/602b lim: 50 exec/s: 42 rss: 74Mb L: 31/50 MS: 1 InsertRepeatedBytes- 00:07:18.789 [2024-11-20 07:03:23.252431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772286 len:36 00:07:18.789 [2024-11-20 07:03:23.252465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.252555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.789 [2024-11-20 07:03:23.252577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.252694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.789 [2024-11-20 07:03:23.252718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.252833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:18.789 [2024-11-20 07:03:23.252858] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.789 #43 NEW cov: 12481 ft: 14729 corp: 18/647b lim: 50 exec/s: 43 rss: 74Mb L: 45/50 MS: 1 InsertByte- 00:07:18.789 [2024-11-20 07:03:23.322827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772206 len:1 00:07:18.789 [2024-11-20 07:03:23.322858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.322934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:18.789 [2024-11-20 07:03:23.322954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.323064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:18.789 [2024-11-20 07:03:23.323088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.323190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:18.789 [2024-11-20 07:03:23.323213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.789 [2024-11-20 07:03:23.323323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:1 00:07:18.789 [2024-11-20 07:03:23.323347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:18.789 #44 NEW cov: 12481 ft: 14740 corp: 19/697b lim: 50 exec/s: 44 rss: 74Mb L: 50/50 MS: 1 InsertByte- 00:07:19.048 [2024-11-20 07:03:23.372622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:19.048 [2024-11-20 07:03:23.372655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.048 [2024-11-20 07:03:23.372719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.048 [2024-11-20 07:03:23.372741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.048 [2024-11-20 07:03:23.372856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:8 00:07:19.048 [2024-11-20 07:03:23.372879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.048 #45 NEW cov: 12481 ft: 14844 corp: 20/730b lim: 50 exec/s: 45 rss: 74Mb L: 33/50 MS: 1 EraseBytes- 00:07:19.048 [2024-11-20 07:03:23.423030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772286 len:36 00:07:19.049 [2024-11-20 07:03:23.423062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.423170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.049 [2024-11-20 07:03:23.423191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.423303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:219657512419328 len:51144 00:07:19.049 [2024-11-20 07:03:23.423326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.423440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:19.049 [2024-11-20 07:03:23.423464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.049 #46 NEW cov: 12481 ft: 14867 corp: 21/779b lim: 50 exec/s: 46 rss: 74Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:07:19.049 [2024-11-20 07:03:23.493450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772286 len:1 00:07:19.049 [2024-11-20 07:03:23.493485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.493603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.049 [2024-11-20 07:03:23.493624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.493738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:42 len:1 00:07:19.049 [2024-11-20 07:03:23.493757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.493865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:19.049 [2024-11-20 07:03:23.493886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.049 #47 NEW cov: 12481 ft: 14982 corp: 22/824b lim: 50 exec/s: 47 rss: 74Mb L: 45/50 MS: 1 InsertByte- 00:07:19.049 [2024-11-20 07:03:23.543633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:19.049 [2024-11-20 07:03:23.543664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.543720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.049 [2024-11-20 07:03:23.543738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.543846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:30786325577728 len:1 00:07:19.049 [2024-11-20 07:03:23.543871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.543984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:19.049 [2024-11-20 07:03:23.544007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.544124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:1 00:07:19.049 [2024-11-20 07:03:23.544145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:19.049 #48 NEW cov: 12481 ft: 15008 corp: 23/874b lim: 50 exec/s: 48 rss: 74Mb L: 50/50 MS: 1 InsertByte- 00:07:19.049 [2024-11-20 07:03:23.593566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:19.049 [2024-11-20 07:03:23.593604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.593710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.049 [2024-11-20 07:03:23.593731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.593846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1152921504606846976 len:1 00:07:19.049 [2024-11-20 07:03:23.593867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.049 [2024-11-20 07:03:23.593984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:8 00:07:19.049 [2024-11-20 07:03:23.594006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.308 #49 NEW cov: 12481 ft: 15024 corp: 24/917b lim: 50 exec/s: 49 rss: 74Mb L: 43/50 MS: 1 ChangeBit- 00:07:19.308 [2024-11-20 07:03:23.643729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:19.308 [2024-11-20 07:03:23.643761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.643840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.308 [2024-11-20 07:03:23.643863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.643976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:19.308 [2024-11-20 07:03:23.644001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.644115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:2049 00:07:19.308 [2024-11-20 07:03:23.644137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.308 #50 NEW cov: 12481 ft: 15056 corp: 25/963b lim: 50 exec/s: 50 rss: 74Mb L: 46/50 MS: 1 EraseBytes- 
00:07:19.308 [2024-11-20 07:03:23.713870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:19.308 [2024-11-20 07:03:23.713902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.713973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.308 [2024-11-20 07:03:23.713994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.714102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5136152270307590144 len:1 00:07:19.308 [2024-11-20 07:03:23.714125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.714238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:19.308 [2024-11-20 07:03:23.714261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.308 #51 NEW cov: 12481 ft: 15070 corp: 26/1010b lim: 50 exec/s: 51 rss: 74Mb L: 47/50 MS: 1 InsertRepeatedBytes- 00:07:19.308 [2024-11-20 07:03:23.764050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:61361545090695168 len:1 00:07:19.308 [2024-11-20 07:03:23.764083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.764162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.308 [2024-11-20 07:03:23.764184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.764291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:19.308 [2024-11-20 07:03:23.764314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.764429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5963776 len:1 00:07:19.308 [2024-11-20 07:03:23.764451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.308 #52 NEW cov: 12481 ft: 15074 corp: 27/1054b lim: 50 exec/s: 52 rss: 74Mb L: 44/50 MS: 1 InsertByte- 00:07:19.308 [2024-11-20 07:03:23.834259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:184549502 len:36 00:07:19.308 [2024-11-20 07:03:23.834288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.834367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.308 [2024-11-20 07:03:23.834387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.834511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:219657512419328 len:51144 00:07:19.308 [2024-11-20 07:03:23.834533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.308 [2024-11-20 07:03:23.834656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:19.308 [2024-11-20 07:03:23.834679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.569 #53 NEW cov: 12481 ft: 15078 corp: 28/1103b lim: 50 exec/s: 53 rss: 74Mb L: 49/50 MS: 1 ChangeBit- 00:07:19.569 [2024-11-20 07:03:23.894641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:19.569 [2024-11-20 07:03:23.894674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.894766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.569 [2024-11-20 07:03:23.894790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.894902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4 len:1 00:07:19.569 [2024-11-20 07:03:23.894925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.895039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2348810240 len:8 00:07:19.569 [2024-11-20 07:03:23.895062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.895181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744069414649855 len:65281 00:07:19.569 [2024-11-20 07:03:23.895202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:19.569 #54 NEW cov: 12481 ft: 15102 corp: 29/1153b lim: 50 exec/s: 54 rss: 75Mb L: 50/50 MS: 1 ShuffleBytes- 00:07:19.569 [2024-11-20 07:03:23.954581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:19.569 [2024-11-20 07:03:23.954618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.954726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.569 [2024-11-20 07:03:23.954748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.954857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:140 len:1 00:07:19.569 [2024-11-20 07:03:23.954884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.954996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:72057589742962432 len:65536 00:07:19.569 [2024-11-20 07:03:23.955019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.569 #55 NEW cov: 12481 ft: 15144 corp: 30/1196b lim: 50 exec/s: 55 rss: 75Mb L: 43/50 MS: 1 EraseBytes- 00:07:19.569 [2024-11-20 07:03:23.994446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:07:19.569 [2024-11-20 07:03:23.994478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.994586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:536870912000 len:1 00:07:19.569 [2024-11-20 07:03:23.994611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.569 [2024-11-20 07:03:23.994734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:19.569 [2024-11-20 07:03:23.994755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.569 #56 NEW cov: 12481 ft: 15157 corp: 31/1230b lim: 50 exec/s: 56 rss: 75Mb L: 34/50 MS: 1 InsertByte- 00:07:19.570 [2024-11-20 07:03:24.064629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772286 len:1 00:07:19.570 [2024-11-20 07:03:24.064660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.570 [2024-11-20 07:03:24.064749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:19.570 [2024-11-20 07:03:24.064770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.570 [2024-11-20 07:03:24.064886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7696581394432 len:1 00:07:19.570 [2024-11-20 07:03:24.064909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.570 #57 NEW cov: 12481 ft: 15219 corp: 32/1260b lim: 50 exec/s: 28 rss: 75Mb L: 30/50 MS: 1 EraseBytes- 00:07:19.570 #57 DONE cov: 12481 ft: 15219 corp: 32/1260b lim: 50 exec/s: 28 rss: 75Mb 00:07:19.570 Done 57 runs in 2 second(s) 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.829 07:03:24 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.829 07:03:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:19.829 [2024-11-20 07:03:24.231059] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:19.829 [2024-11-20 07:03:24.231131] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783969 ] 00:07:20.087 [2024-11-20 07:03:24.416009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.087 [2024-11-20 07:03:24.450427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.087 [2024-11-20 07:03:24.510444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.087 [2024-11-20 07:03:24.526650] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:20.087 INFO: Running with entropic power schedule (0xFF, 100). 00:07:20.087 INFO: Seed: 2909141288 00:07:20.087 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:20.087 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:20.087 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:20.087 INFO: A corpus is not provided, starting from an empty corpus 00:07:20.087 #2 INITED exec/s: 0 rss: 65Mb 00:07:20.087 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:20.087 This may also happen if the target rejected all inputs we tried so far 00:07:20.087 [2024-11-20 07:03:24.574122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.087 [2024-11-20 07:03:24.574153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.345 NEW_FUNC[1/717]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:20.345 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.345 #16 NEW cov: 12312 ft: 12308 corp: 2/21b lim: 90 exec/s: 0 rss: 73Mb L: 20/20 MS: 4 CrossOver-CrossOver-InsertByte-InsertRepeatedBytes- 00:07:20.605 [2024-11-20 07:03:24.904966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.605 [2024-11-20 07:03:24.905007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.605 #20 NEW cov: 12425 ft: 12877 corp: 3/43b lim: 90 exec/s: 0 rss: 73Mb L: 22/22 MS: 4 CopyPart-ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:20.605 [2024-11-20 07:03:24.945250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.605 [2024-11-20 07:03:24.945279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.605 [2024-11-20 07:03:24.945315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.605 [2024-11-20 07:03:24.945331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.605 [2024-11-20 07:03:24.945386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.605 [2024-11-20 07:03:24.945404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.605 #21 NEW cov: 12431 ft: 13985 corp: 4/107b lim: 90 exec/s: 0 rss: 73Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:07:20.605 [2024-11-20 07:03:25.005131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.605 [2024-11-20 07:03:25.005160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.605 #22 NEW cov: 12516 ft: 14280 corp: 5/129b lim: 90 exec/s: 0 rss: 73Mb L: 22/64 MS: 1 ChangeBit- 00:07:20.605 [2024-11-20 07:03:25.045503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.605 [2024-11-20 07:03:25.045530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.605 [2024-11-20 07:03:25.045585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.605 [2024-11-20 07:03:25.045605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.605 [2024-11-20 07:03:25.045658] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.605 [2024-11-20 07:03:25.045675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.605 #27 NEW cov: 12516 ft: 14356 corp: 6/193b lim: 90 exec/s: 0 rss: 73Mb L: 64/64 MS: 5 CrossOver-ChangeByte-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:20.605 [2024-11-20 07:03:25.085311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.605 [2024-11-20 07:03:25.085339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.605 #28 NEW cov: 12516 ft: 14496 corp: 7/215b lim: 90 exec/s: 0 rss: 73Mb L: 22/64 MS: 1 ChangeBit- 00:07:20.605 [2024-11-20 07:03:25.125434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.605 [2024-11-20 07:03:25.125461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.864 #29 NEW cov: 12516 ft: 14570 corp: 8/237b lim: 90 exec/s: 0 rss: 73Mb L: 22/64 MS: 1 ChangeBinInt- 00:07:20.864 [2024-11-20 07:03:25.185937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.864 [2024-11-20 07:03:25.185966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.864 [2024-11-20 07:03:25.186018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.864 [2024-11-20 07:03:25.186035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.864 [2024-11-20 07:03:25.186089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:20.864 [2024-11-20 07:03:25.186105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.864 #30 NEW cov: 12516 ft: 14605 corp: 9/301b lim: 90 exec/s: 0 rss: 73Mb L: 64/64 MS: 1 ChangeBinInt- 00:07:20.864 [2024-11-20 07:03:25.245940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.864 [2024-11-20 07:03:25.245967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.865 [2024-11-20 07:03:25.246006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.865 [2024-11-20 07:03:25.246027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.865 #31 NEW cov: 12516 ft: 14943 corp: 10/354b lim: 90 exec/s: 0 rss: 73Mb L: 53/64 MS: 1 InsertRepeatedBytes- 00:07:20.865 [2024-11-20 07:03:25.285997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.865 [2024-11-20 07:03:25.286023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.865 [2024-11-20 07:03:25.286072] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:20.865 [2024-11-20 07:03:25.286087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.865 #32 NEW cov: 12516 ft: 14978 corp: 11/399b lim: 90 exec/s: 0 rss: 73Mb L: 45/64 MS: 1 InsertRepeatedBytes- 00:07:20.865 [2024-11-20 07:03:25.346038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.865 [2024-11-20 07:03:25.346065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.865 #33 NEW cov: 12516 ft: 15007 corp: 12/419b lim: 90 exec/s: 0 rss: 74Mb L: 20/64 MS: 1 CrossOver- 00:07:20.865 [2024-11-20 07:03:25.406242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:20.865 [2024-11-20 07:03:25.406269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.124 #34 NEW cov: 12516 ft: 15035 corp: 13/441b lim: 90 exec/s: 0 rss: 74Mb L: 22/64 MS: 1 ChangeBinInt- 00:07:21.124 [2024-11-20 07:03:25.446296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.124 [2024-11-20 07:03:25.446324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.124 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:21.124 #35 NEW cov: 12539 ft: 15091 corp: 14/463b lim: 90 exec/s: 0 rss: 74Mb L: 22/64 MS: 1 ChangeByte- 00:07:21.124 [2024-11-20 07:03:25.486737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.124 [2024-11-20 07:03:25.486764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.124 [2024-11-20 07:03:25.486800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.124 [2024-11-20 07:03:25.486816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.124 [2024-11-20 07:03:25.486869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.124 [2024-11-20 07:03:25.486885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.124 #36 NEW cov: 12539 ft: 15192 corp: 15/527b lim: 90 exec/s: 0 rss: 74Mb L: 64/64 MS: 1 ChangeBinInt- 00:07:21.124 [2024-11-20 07:03:25.546613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.124 [2024-11-20 07:03:25.546641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.124 #37 NEW cov: 12539 ft: 15235 corp: 16/547b lim: 90 exec/s: 37 rss: 74Mb L: 20/64 MS: 1 ChangeByte- 00:07:21.124 [2024-11-20 07:03:25.607060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.124 [2024-11-20 07:03:25.607089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.124 [2024-11-20 07:03:25.607139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.124 [2024-11-20 07:03:25.607159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.124 [2024-11-20 07:03:25.607213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.124 [2024-11-20 07:03:25.607229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.124 #38 NEW cov: 12539 ft: 15246 corp: 17/611b lim: 90 exec/s: 38 rss: 74Mb L: 64/64 MS: 1 ShuffleBytes- 00:07:21.124 [2024-11-20 07:03:25.666948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.124 [2024-11-20 07:03:25.666975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.383 #39 NEW cov: 12539 ft: 15266 corp: 18/633b lim: 90 exec/s: 39 rss: 74Mb L: 22/64 MS: 1 CrossOver- 00:07:21.383 [2024-11-20 07:03:25.707478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.383 [2024-11-20 07:03:25.707504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.383 [2024-11-20 07:03:25.707555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.383 [2024-11-20 07:03:25.707570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.383 [2024-11-20 07:03:25.707624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.383 [2024-11-20 07:03:25.707654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.383 [2024-11-20 07:03:25.707708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.383 [2024-11-20 07:03:25.707723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.383 #40 NEW cov: 12539 ft: 15663 corp: 19/709b lim: 90 exec/s: 40 rss: 74Mb L: 76/76 MS: 1 InsertRepeatedBytes- 00:07:21.383 [2024-11-20 07:03:25.747136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.383 [2024-11-20 07:03:25.747164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.383 #41 NEW cov: 12539 ft: 15677 corp: 20/730b lim: 90 exec/s: 41 rss: 74Mb L: 21/76 MS: 1 InsertByte- 00:07:21.383 [2024-11-20 07:03:25.807350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.383 [2024-11-20 07:03:25.807378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.383 #42 NEW cov: 12539 ft: 15709 corp: 21/752b lim: 90 exec/s: 42 rss: 74Mb L: 22/76 MS: 1 ChangeBinInt- 00:07:21.383 [2024-11-20 
07:03:25.867669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.383 [2024-11-20 07:03:25.867696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.383 [2024-11-20 07:03:25.867735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.383 [2024-11-20 07:03:25.867751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.383 #43 NEW cov: 12539 ft: 15720 corp: 22/797b lim: 90 exec/s: 43 rss: 74Mb L: 45/76 MS: 1 CopyPart- 00:07:21.383 [2024-11-20 07:03:25.907917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.383 [2024-11-20 07:03:25.907945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.383 [2024-11-20 07:03:25.907990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.383 [2024-11-20 07:03:25.908008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.383 [2024-11-20 07:03:25.908060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.383 [2024-11-20 07:03:25.908075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.643 #44 NEW cov: 12539 ft: 15743 corp: 23/861b lim: 90 exec/s: 44 rss: 74Mb L: 64/76 MS: 1 ChangeBit- 00:07:21.643 [2024-11-20 07:03:25.967947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.643 [2024-11-20 07:03:25.967975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.643 [2024-11-20 07:03:25.968026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.643 [2024-11-20 07:03:25.968041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.643 #45 NEW cov: 12539 ft: 15758 corp: 24/897b lim: 90 exec/s: 45 rss: 74Mb L: 36/76 MS: 1 InsertRepeatedBytes- 00:07:21.643 [2024-11-20 07:03:26.028459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.643 [2024-11-20 07:03:26.028487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.643 [2024-11-20 07:03:26.028526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.643 [2024-11-20 07:03:26.028540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.643 [2024-11-20 07:03:26.028593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.643 [2024-11-20 07:03:26.028614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.643 [2024-11-20 
07:03:26.028669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.643 [2024-11-20 07:03:26.028684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.643 #46 NEW cov: 12539 ft: 15770 corp: 25/985b lim: 90 exec/s: 46 rss: 74Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:07:21.643 [2024-11-20 07:03:26.068060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.643 [2024-11-20 07:03:26.068088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.643 #47 NEW cov: 12539 ft: 15806 corp: 26/1007b lim: 90 exec/s: 47 rss: 75Mb L: 22/88 MS: 1 ShuffleBytes- 00:07:21.643 [2024-11-20 07:03:26.128406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.643 [2024-11-20 07:03:26.128433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.643 [2024-11-20 07:03:26.128470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.643 [2024-11-20 07:03:26.128486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.643 #48 NEW cov: 12539 ft: 15822 corp: 27/1052b lim: 90 exec/s: 48 rss: 75Mb L: 45/88 MS: 1 ChangeByte- 00:07:21.643 [2024-11-20 07:03:26.168678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.643 [2024-11-20 07:03:26.168705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.643 [2024-11-20 07:03:26.168751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.643 [2024-11-20 07:03:26.168770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.643 [2024-11-20 07:03:26.168823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.643 [2024-11-20 07:03:26.168838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.902 #49 NEW cov: 12539 ft: 15826 corp: 28/1117b lim: 90 exec/s: 49 rss: 75Mb L: 65/88 MS: 1 InsertByte- 00:07:21.902 [2024-11-20 07:03:26.229035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.902 [2024-11-20 07:03:26.229063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.902 [2024-11-20 07:03:26.229121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.902 [2024-11-20 07:03:26.229137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.902 [2024-11-20 07:03:26.229189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:21.902 [2024-11-20 07:03:26.229204] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.902 [2024-11-20 07:03:26.229259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:21.902 [2024-11-20 07:03:26.229276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.902 #50 NEW cov: 12539 ft: 15829 corp: 29/1193b lim: 90 exec/s: 50 rss: 75Mb L: 76/88 MS: 1 InsertRepeatedBytes- 00:07:21.902 [2024-11-20 07:03:26.288858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.902 [2024-11-20 07:03:26.288887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.902 [2024-11-20 07:03:26.288926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.902 [2024-11-20 07:03:26.288942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.902 #54 NEW cov: 12539 ft: 15844 corp: 30/1233b lim: 90 exec/s: 54 rss: 75Mb L: 40/88 MS: 4 EraseBytes-EraseBytes-CopyPart-InsertRepeatedBytes- 00:07:21.902 [2024-11-20 07:03:26.349012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.902 [2024-11-20 07:03:26.349039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.902 [2024-11-20 07:03:26.349085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.902 [2024-11-20 07:03:26.349100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.902 #55 NEW cov: 12539 ft: 15866 corp: 31/1278b lim: 90 exec/s: 55 rss: 75Mb L: 45/88 MS: 1 ChangeBit- 00:07:21.902 [2024-11-20 07:03:26.409204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:21.902 [2024-11-20 07:03:26.409231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.902 [2024-11-20 07:03:26.409282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:21.902 [2024-11-20 07:03:26.409298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.902 #56 NEW cov: 12539 ft: 15889 corp: 32/1324b lim: 90 exec/s: 56 rss: 75Mb L: 46/88 MS: 1 InsertByte- 00:07:22.165 [2024-11-20 07:03:26.469216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:22.165 [2024-11-20 07:03:26.469246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.165 #57 NEW cov: 12539 ft: 15893 corp: 33/1347b lim: 90 exec/s: 57 rss: 75Mb L: 23/88 MS: 1 InsertByte- 00:07:22.165 [2024-11-20 07:03:26.509729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:22.165 [2024-11-20 07:03:26.509757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.165 [2024-11-20 07:03:26.509803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:22.165 [2024-11-20 07:03:26.509819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.165 [2024-11-20 07:03:26.509873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:22.165 [2024-11-20 07:03:26.509905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.165 [2024-11-20 07:03:26.509961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:22.165 [2024-11-20 07:03:26.509977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.165 #58 NEW cov: 12539 ft: 15910 corp: 34/1435b lim: 90 exec/s: 58 rss: 75Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:07:22.165 [2024-11-20 07:03:26.549378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:22.165 [2024-11-20 07:03:26.549405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.165 #59 NEW cov: 12539 ft: 15922 corp: 35/1462b lim: 90 exec/s: 29 rss: 75Mb L: 27/88 MS: 1 EraseBytes- 00:07:22.165 #59 DONE cov: 12539 ft: 15922 corp: 35/1462b lim: 90 exec/s: 29 rss: 75Mb 00:07:22.165 Done 59 runs in 2 second(s) 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:22.165 07:03:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:22.519 [2024-11-20 07:03:26.743874] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:22.519 [2024-11-20 07:03:26.743945] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784505 ] 00:07:22.519 [2024-11-20 07:03:26.933005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.519 [2024-11-20 07:03:26.965787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.519 [2024-11-20 07:03:27.025109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.519 [2024-11-20 07:03:27.041465] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:22.795 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.795 INFO: Seed: 1130149946 00:07:22.795 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:22.795 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:22.795 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:22.795 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.795 #2 INITED exec/s: 0 rss: 66Mb 00:07:22.795 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:22.795 This may also happen if the target rejected all inputs we tried so far 00:07:22.795 [2024-11-20 07:03:27.090313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.795 [2024-11-20 07:03:27.090344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.795 [2024-11-20 07:03:27.090398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:22.795 [2024-11-20 07:03:27.090414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.053 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:23.054 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:23.054 #6 NEW cov: 12285 ft: 12284 corp: 2/21b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 4 ChangeByte-CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:23.054 [2024-11-20 07:03:27.411032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.054 [2024-11-20 07:03:27.411064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.054 [2024-11-20 07:03:27.411136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.054 [2024-11-20 07:03:27.411153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.054 NEW_FUNC[1/1]: 0x179fe28 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1569 00:07:23.054 #7 NEW cov: 12400 ft: 12879 corp: 3/41b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:23.054 [2024-11-20 07:03:27.470967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.054 [2024-11-20 07:03:27.470995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.054 #16 NEW cov: 12406 ft: 13893 corp: 4/52b lim: 50 exec/s: 0 rss: 73Mb L: 11/20 MS: 4 InsertByte-CrossOver-CopyPart-CMP- DE: "\000\214\330\015v\006\337\240"- 00:07:23.054 [2024-11-20 07:03:27.511208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.054 [2024-11-20 07:03:27.511236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.054 [2024-11-20 07:03:27.511275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.054 [2024-11-20 07:03:27.511291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.054 #17 NEW cov: 12491 ft: 14327 corp: 5/72b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:07:23.054 [2024-11-20 07:03:27.551319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.054 [2024-11-20 07:03:27.551347] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.054 [2024-11-20 07:03:27.551402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.054 [2024-11-20 07:03:27.551419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.054 #18 NEW cov: 12491 ft: 14411 corp: 6/92b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:23.054 [2024-11-20 07:03:27.591311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.054 [2024-11-20 07:03:27.591339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.313 #19 NEW cov: 12491 ft: 14479 corp: 7/104b lim: 50 exec/s: 0 rss: 74Mb L: 12/20 MS: 1 InsertByte- 00:07:23.313 [2024-11-20 07:03:27.651707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.313 [2024-11-20 07:03:27.651734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.313 #22 NEW cov: 12491 ft: 14643 corp: 8/120b lim: 50 exec/s: 0 rss: 74Mb L: 16/20 MS: 3 ChangeBit-ChangeBit-CrossOver- 00:07:23.313 [2024-11-20 07:03:27.691702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.313 [2024-11-20 07:03:27.691731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.313 [2024-11-20 07:03:27.691784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.313 [2024-11-20 07:03:27.691800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.313 #23 NEW cov: 12491 ft: 14706 corp: 9/140b lim: 50 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:23.313 [2024-11-20 07:03:27.751943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.313 [2024-11-20 07:03:27.751972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.313 [2024-11-20 07:03:27.752015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.313 [2024-11-20 07:03:27.752031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.313 #24 NEW cov: 12491 ft: 14811 corp: 10/160b lim: 50 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ChangeByte- 00:07:23.313 [2024-11-20 07:03:27.792020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.313 [2024-11-20 07:03:27.792048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.313 [2024-11-20 07:03:27.792102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.313 [2024-11-20 07:03:27.792118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.313 #25 NEW cov: 12491 ft: 14865 corp: 11/181b lim: 50 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 InsertByte- 00:07:23.313 [2024-11-20 07:03:27.832433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.313 [2024-11-20 07:03:27.832462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.313 [2024-11-20 07:03:27.832506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.313 [2024-11-20 07:03:27.832522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.313 [2024-11-20 07:03:27.832578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.313 [2024-11-20 07:03:27.832593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.313 [2024-11-20 07:03:27.832659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.313 [2024-11-20 07:03:27.832674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.572 #26 NEW cov: 12491 ft: 15268 corp: 12/230b lim: 50 exec/s: 0 rss: 74Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:07:23.572 [2024-11-20 07:03:27.892305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.573 [2024-11-20 07:03:27.892333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:27.892385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.573 [2024-11-20 07:03:27.892401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.573 #27 NEW cov: 12491 ft: 15282 corp: 13/253b lim: 50 exec/s: 0 rss: 74Mb L: 23/49 MS: 1 CrossOver- 00:07:23.573 [2024-11-20 07:03:27.932882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.573 [2024-11-20 07:03:27.932910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:27.932966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.573 [2024-11-20 07:03:27.932981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:27.933038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.573 [2024-11-20 07:03:27.933054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:27.933108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.573 [2024-11-20 07:03:27.933122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 
cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:27.933177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:23.573 [2024-11-20 07:03:27.933193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:23.573 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:23.573 #28 NEW cov: 12514 ft: 15391 corp: 14/303b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:23.573 [2024-11-20 07:03:27.992912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.573 [2024-11-20 07:03:27.992939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:27.992999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.573 [2024-11-20 07:03:27.993025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:27.993094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.573 [2024-11-20 07:03:27.993110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:27.993168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.573 [2024-11-20 07:03:27.993183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.573 #29 NEW cov: 12514 ft: 15397 corp: 15/352b lim: 50 exec/s: 0 rss: 74Mb L: 49/50 MS: 1 ChangeBit- 00:07:23.573 [2024-11-20 07:03:28.053204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.573 [2024-11-20 07:03:28.053233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:28.053287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.573 [2024-11-20 07:03:28.053302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:28.053355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.573 [2024-11-20 07:03:28.053370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:28.053424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.573 [2024-11-20 07:03:28.053439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.573 [2024-11-20 07:03:28.053493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:23.573 [2024-11-20 07:03:28.053507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:23.573 #30 NEW cov: 12514 ft: 15437 corp: 16/402b lim: 50 exec/s: 30 rss: 74Mb L: 50/50 MS: 1 InsertByte- 00:07:23.573 [2024-11-20 07:03:28.092684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.573 [2024-11-20 07:03:28.092712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.573 #31 NEW cov: 12514 ft: 15463 corp: 17/418b lim: 50 exec/s: 31 rss: 74Mb L: 16/50 MS: 1 ChangeByte- 00:07:23.832 [2024-11-20 07:03:28.133007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.832 [2024-11-20 07:03:28.133034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.133072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.832 [2024-11-20 07:03:28.133087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.832 #32 NEW cov: 12514 ft: 15522 corp: 18/438b lim: 50 exec/s: 32 rss: 74Mb L: 20/50 MS: 1 PersAutoDict- DE: "\000\214\330\015v\006\337\240"- 00:07:23.832 [2024-11-20 07:03:28.173442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.832 [2024-11-20 07:03:28.173470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.173534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.832 [2024-11-20 07:03:28.173552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.173612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.832 [2024-11-20 07:03:28.173626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.173683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.832 [2024-11-20 07:03:28.173698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.832 #33 NEW cov: 12514 ft: 15542 corp: 19/482b lim: 50 exec/s: 33 rss: 74Mb L: 44/50 MS: 1 InsertRepeatedBytes- 00:07:23.832 [2024-11-20 07:03:28.213525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.832 [2024-11-20 07:03:28.213551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.213611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.832 [2024-11-20 07:03:28.213627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.213680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE 
(15) sqid:1 cid:2 nsid:0 00:07:23.832 [2024-11-20 07:03:28.213695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.213749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.832 [2024-11-20 07:03:28.213764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.832 #34 NEW cov: 12514 ft: 15617 corp: 20/531b lim: 50 exec/s: 34 rss: 74Mb L: 49/50 MS: 1 ChangeBinInt- 00:07:23.832 [2024-11-20 07:03:28.273727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.832 [2024-11-20 07:03:28.273755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.273803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.832 [2024-11-20 07:03:28.273819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.273872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.832 [2024-11-20 07:03:28.273903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.273956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.832 [2024-11-20 07:03:28.273972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.832 #35 NEW cov: 12514 ft: 15640 corp: 21/580b lim: 50 exec/s: 35 rss: 74Mb L: 49/50 MS: 1 ShuffleBytes- 00:07:23.832 [2024-11-20 07:03:28.313847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:23.832 [2024-11-20 07:03:28.313874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.313925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.832 [2024-11-20 07:03:28.313940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.313999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:23.832 [2024-11-20 07:03:28.314014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.314070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:23.832 [2024-11-20 07:03:28.314084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.832 #36 NEW cov: 12514 ft: 15650 corp: 22/629b lim: 50 exec/s: 36 rss: 74Mb L: 49/50 MS: 1 ShuffleBytes- 00:07:23.832 [2024-11-20 07:03:28.373690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE 
(15) sqid:1 cid:0 nsid:0 00:07:23.832 [2024-11-20 07:03:28.373717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.832 [2024-11-20 07:03:28.373757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:23.832 [2024-11-20 07:03:28.373773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.433860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.091 [2024-11-20 07:03:28.433887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.433926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.091 [2024-11-20 07:03:28.433942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.091 #38 NEW cov: 12514 ft: 15656 corp: 23/649b lim: 50 exec/s: 38 rss: 74Mb L: 20/50 MS: 2 ChangeBinInt-ShuffleBytes- 00:07:24.091 [2024-11-20 07:03:28.474431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.091 [2024-11-20 07:03:28.474458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.474515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.091 [2024-11-20 07:03:28.474530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.474583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:24.091 [2024-11-20 07:03:28.474604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.474660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:24.091 [2024-11-20 07:03:28.474675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.474730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:24.091 [2024-11-20 07:03:28.474745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.091 #39 NEW cov: 12514 ft: 15684 corp: 24/699b lim: 50 exec/s: 39 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:07:24.091 [2024-11-20 07:03:28.514067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.091 [2024-11-20 07:03:28.514095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.514133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.091 [2024-11-20 07:03:28.514149] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.091 #40 NEW cov: 12514 ft: 15704 corp: 25/728b lim: 50 exec/s: 40 rss: 74Mb L: 29/50 MS: 1 PersAutoDict- DE: "\000\214\330\015v\006\337\240"- 00:07:24.091 [2024-11-20 07:03:28.574718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.091 [2024-11-20 07:03:28.574746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.574803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.091 [2024-11-20 07:03:28.574819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.574889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:24.091 [2024-11-20 07:03:28.574905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.574959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:24.091 [2024-11-20 07:03:28.574975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.091 [2024-11-20 07:03:28.575029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:24.091 [2024-11-20 07:03:28.575045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.091 #41 NEW cov: 12514 ft: 15719 corp: 26/778b lim: 50 exec/s: 41 rss: 75Mb L: 50/50 MS: 1 ChangeBit- 00:07:24.091 [2024-11-20 07:03:28.634234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.091 [2024-11-20 07:03:28.634261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.349 #42 NEW cov: 12514 ft: 15737 corp: 27/794b lim: 50 exec/s: 42 rss: 75Mb L: 16/50 MS: 1 ShuffleBytes- 00:07:24.349 [2024-11-20 07:03:28.694581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.349 [2024-11-20 07:03:28.694613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.694651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.349 [2024-11-20 07:03:28.694665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.349 #43 NEW cov: 12514 ft: 15748 corp: 28/814b lim: 50 exec/s: 43 rss: 75Mb L: 20/50 MS: 1 ChangeBit- 00:07:24.349 [2024-11-20 07:03:28.755190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.349 [2024-11-20 07:03:28.755217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.755272] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.349 [2024-11-20 07:03:28.755287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.755340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:24.349 [2024-11-20 07:03:28.755355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.755409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:24.349 [2024-11-20 07:03:28.755424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.755478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:24.349 [2024-11-20 07:03:28.755497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.349 #44 NEW cov: 12514 ft: 15771 corp: 29/864b lim: 50 exec/s: 44 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:07:24.349 [2024-11-20 07:03:28.814901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.349 [2024-11-20 07:03:28.814928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.814968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.349 [2024-11-20 07:03:28.814983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.349 #50 NEW cov: 12514 ft: 15782 corp: 30/884b lim: 50 exec/s: 50 rss: 75Mb L: 20/50 MS: 1 CopyPart- 00:07:24.349 [2024-11-20 07:03:28.855034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.349 [2024-11-20 07:03:28.855062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.855116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.349 [2024-11-20 07:03:28.855134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.349 #51 NEW cov: 12514 ft: 15790 corp: 31/904b lim: 50 exec/s: 51 rss: 75Mb L: 20/50 MS: 1 ChangeBit- 00:07:24.349 [2024-11-20 07:03:28.895587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.349 [2024-11-20 07:03:28.895619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.895678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.349 [2024-11-20 07:03:28.895694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.895751] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:24.349 [2024-11-20 07:03:28.895766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.895823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:24.349 [2024-11-20 07:03:28.895838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.349 [2024-11-20 07:03:28.895894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:24.349 [2024-11-20 07:03:28.895911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.608 #52 NEW cov: 12514 ft: 15804 corp: 32/954b lim: 50 exec/s: 52 rss: 75Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:24.608 [2024-11-20 07:03:28.935263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.608 [2024-11-20 07:03:28.935290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.608 [2024-11-20 07:03:28.935349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.608 [2024-11-20 07:03:28.935365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.608 #53 NEW cov: 12514 ft: 15827 corp: 33/974b lim: 50 exec/s: 53 rss: 75Mb L: 20/50 MS: 1 CrossOver- 00:07:24.608 [2024-11-20 07:03:28.975192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.608 [2024-11-20 07:03:28.975223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.608 #54 NEW cov: 12514 ft: 15860 corp: 34/990b lim: 50 exec/s: 54 rss: 75Mb L: 16/50 MS: 1 ChangeBinInt- 00:07:24.608 [2024-11-20 07:03:29.035530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.608 [2024-11-20 07:03:29.035557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.608 [2024-11-20 07:03:29.035596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.608 [2024-11-20 07:03:29.035617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.608 #55 NEW cov: 12514 ft: 15883 corp: 35/1010b lim: 50 exec/s: 55 rss: 75Mb L: 20/50 MS: 1 ChangeBit- 00:07:24.608 [2024-11-20 07:03:29.076098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:24.608 [2024-11-20 07:03:29.076124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.608 [2024-11-20 07:03:29.076182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:24.608 [2024-11-20 07:03:29.076197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.608 [2024-11-20 07:03:29.076252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:24.608 [2024-11-20 07:03:29.076265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.608 [2024-11-20 07:03:29.076322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:24.608 [2024-11-20 07:03:29.076336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.608 [2024-11-20 07:03:29.076391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:24.608 [2024-11-20 07:03:29.076406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.608 #56 NEW cov: 12514 ft: 15898 corp: 36/1060b lim: 50 exec/s: 28 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:07:24.608 #56 DONE cov: 12514 ft: 15898 corp: 36/1060b lim: 50 exec/s: 28 rss: 75Mb 00:07:24.608 ###### Recommended dictionary. ###### 00:07:24.608 "\000\214\330\015v\006\337\240" # Uses: 2 00:07:24.608 ###### End of recommended dictionary. ###### 00:07:24.608 Done 56 runs in 2 second(s) 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.867 07:03:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.867 07:03:29 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:24.867 [2024-11-20 07:03:29.269756] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:24.867 [2024-11-20 07:03:29.269830] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784799 ] 00:07:25.125 [2024-11-20 07:03:29.460636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.125 [2024-11-20 07:03:29.494190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.125 [2024-11-20 07:03:29.553502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.125 [2024-11-20 07:03:29.569848] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:25.125 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.125 INFO: Seed: 3658138068 00:07:25.125 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:25.125 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:25.125 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:25.125 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.125 #2 INITED exec/s: 0 rss: 65Mb 00:07:25.125 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:25.125 This may also happen if the target rejected all inputs we tried so far 00:07:25.125 [2024-11-20 07:03:29.638948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.125 [2024-11-20 07:03:29.638984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.125 [2024-11-20 07:03:29.639115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.125 [2024-11-20 07:03:29.639139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.643 NEW_FUNC[1/717]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:25.643 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.643 #5 NEW cov: 12313 ft: 12314 corp: 2/37b lim: 85 exec/s: 0 rss: 73Mb L: 36/36 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:25.643 [2024-11-20 07:03:29.980139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.643 [2024-11-20 07:03:29.980191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.643 [2024-11-20 07:03:29.980326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.643 [2024-11-20 07:03:29.980358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.643 #6 NEW cov: 12426 ft: 12890 corp: 3/73b lim: 85 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 ChangeBit- 00:07:25.643 [2024-11-20 07:03:30.049963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.643 [2024-11-20 07:03:30.049997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.643 #7 NEW cov: 12432 ft: 13860 corp: 4/98b lim: 85 exec/s: 0 rss: 73Mb L: 25/36 MS: 1 EraseBytes- 00:07:25.643 [2024-11-20 07:03:30.120153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.643 [2024-11-20 07:03:30.120193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.643 #8 NEW cov: 12517 ft: 14267 corp: 5/123b lim: 85 exec/s: 0 rss: 73Mb L: 25/36 MS: 1 ChangeBit- 00:07:25.643 [2024-11-20 07:03:30.190332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.643 [2024-11-20 07:03:30.190361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.901 #9 NEW cov: 12517 ft: 14349 corp: 6/148b lim: 85 exec/s: 0 rss: 73Mb L: 25/36 MS: 1 ChangeBinInt- 00:07:25.901 [2024-11-20 07:03:30.260842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.901 [2024-11-20 07:03:30.260877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:25.902 [2024-11-20 07:03:30.261013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.902 [2024-11-20 07:03:30.261051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.902 #10 NEW cov: 12517 ft: 14426 corp: 7/185b lim: 85 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertByte- 00:07:25.902 [2024-11-20 07:03:30.311302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.902 [2024-11-20 07:03:30.311339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.902 [2024-11-20 07:03:30.311417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:25.902 [2024-11-20 07:03:30.311440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.902 [2024-11-20 07:03:30.311564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:25.902 [2024-11-20 07:03:30.311592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.902 #11 NEW cov: 12517 ft: 14825 corp: 8/245b lim: 85 exec/s: 0 rss: 73Mb L: 60/60 MS: 1 InsertRepeatedBytes- 00:07:25.902 [2024-11-20 07:03:30.380992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.902 [2024-11-20 07:03:30.381019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.902 #17 NEW cov: 12517 ft: 14852 corp: 9/270b lim: 85 exec/s: 0 rss: 73Mb L: 25/60 MS: 1 ChangeBit- 00:07:25.902 [2024-11-20 07:03:30.431058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:25.902 [2024-11-20 07:03:30.431089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.902 #18 NEW cov: 12517 ft: 15017 corp: 10/299b lim: 85 exec/s: 0 rss: 74Mb L: 29/60 MS: 1 CMP- DE: "\023\325V\341"- 00:07:26.160 [2024-11-20 07:03:30.481664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.160 [2024-11-20 07:03:30.481696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.160 [2024-11-20 07:03:30.481829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.160 [2024-11-20 07:03:30.481852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.160 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:26.160 #19 NEW cov: 12540 ft: 15113 corp: 11/341b lim: 85 exec/s: 0 rss: 74Mb L: 42/60 MS: 1 CrossOver- 00:07:26.160 [2024-11-20 07:03:30.551852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.160 [2024-11-20 07:03:30.551885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.160 [2024-11-20 07:03:30.552021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.160 [2024-11-20 07:03:30.552048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.160 #20 NEW cov: 12540 ft: 15163 corp: 12/383b lim: 85 exec/s: 0 rss: 74Mb L: 42/60 MS: 1 ChangeBinInt- 00:07:26.160 [2024-11-20 07:03:30.621788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.160 [2024-11-20 07:03:30.621815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.161 #21 NEW cov: 12540 ft: 15178 corp: 13/405b lim: 85 exec/s: 21 rss: 74Mb L: 22/60 MS: 1 EraseBytes- 00:07:26.161 [2024-11-20 07:03:30.671891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.161 [2024-11-20 07:03:30.671932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.161 #22 NEW cov: 12540 ft: 15334 corp: 14/438b lim: 85 exec/s: 22 rss: 74Mb L: 33/60 MS: 1 CopyPart- 00:07:26.419 [2024-11-20 07:03:30.722843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.419 [2024-11-20 07:03:30.722878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.419 [2024-11-20 07:03:30.723004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.419 [2024-11-20 07:03:30.723028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.419 [2024-11-20 07:03:30.723149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:26.419 [2024-11-20 07:03:30.723172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.419 [2024-11-20 07:03:30.723304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:26.419 [2024-11-20 07:03:30.723328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.419 #23 NEW cov: 12540 ft: 15703 corp: 15/510b lim: 85 exec/s: 23 rss: 74Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:07:26.419 [2024-11-20 07:03:30.782159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.419 [2024-11-20 07:03:30.782186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.419 #24 NEW cov: 12540 ft: 15751 corp: 16/543b lim: 85 exec/s: 24 rss: 74Mb L: 33/72 MS: 1 PersAutoDict- DE: "\023\325V\341"- 00:07:26.419 [2024-11-20 07:03:30.852491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.419 [2024-11-20 07:03:30.852540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:26.419 #25 NEW cov: 12540 ft: 15764 corp: 17/572b lim: 85 exec/s: 25 rss: 74Mb L: 29/72 MS: 1 EraseBytes- 00:07:26.419 [2024-11-20 07:03:30.922713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.419 [2024-11-20 07:03:30.922743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.419 #26 NEW cov: 12540 ft: 15826 corp: 18/594b lim: 85 exec/s: 26 rss: 74Mb L: 22/72 MS: 1 ChangeByte- 00:07:26.678 [2024-11-20 07:03:30.993237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.678 [2024-11-20 07:03:30.993266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.678 [2024-11-20 07:03:30.993394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.678 [2024-11-20 07:03:30.993422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.678 #27 NEW cov: 12540 ft: 15864 corp: 19/640b lim: 85 exec/s: 27 rss: 74Mb L: 46/72 MS: 1 InsertRepeatedBytes- 00:07:26.678 [2024-11-20 07:03:31.062978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.678 [2024-11-20 07:03:31.063011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.678 #28 NEW cov: 12540 ft: 15876 corp: 20/673b lim: 85 exec/s: 28 rss: 74Mb L: 33/72 MS: 1 ChangeBit- 00:07:26.678 [2024-11-20 07:03:31.113464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.678 [2024-11-20 07:03:31.113500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.678 [2024-11-20 07:03:31.113633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.678 [2024-11-20 07:03:31.113656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.678 #29 NEW cov: 12540 ft: 15893 corp: 21/709b lim: 85 exec/s: 29 rss: 74Mb L: 36/72 MS: 1 ChangeBinInt- 00:07:26.678 [2024-11-20 07:03:31.163672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.678 [2024-11-20 07:03:31.163706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.678 [2024-11-20 07:03:31.163843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.678 [2024-11-20 07:03:31.163867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.678 #30 NEW cov: 12540 ft: 15924 corp: 22/745b lim: 85 exec/s: 30 rss: 74Mb L: 36/72 MS: 1 PersAutoDict- DE: "\023\325V\341"- 00:07:26.678 [2024-11-20 07:03:31.213845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.678 [2024-11-20 07:03:31.213878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.678 [2024-11-20 07:03:31.214002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.678 [2024-11-20 07:03:31.214025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.938 #31 NEW cov: 12540 ft: 15955 corp: 23/785b lim: 85 exec/s: 31 rss: 74Mb L: 40/72 MS: 1 PersAutoDict- DE: "\023\325V\341"- 00:07:26.938 [2024-11-20 07:03:31.284314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.938 [2024-11-20 07:03:31.284349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.938 [2024-11-20 07:03:31.284457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.938 [2024-11-20 07:03:31.284479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.938 [2024-11-20 07:03:31.284604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:26.938 [2024-11-20 07:03:31.284627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.938 #32 NEW cov: 12540 ft: 15976 corp: 24/845b lim: 85 exec/s: 32 rss: 75Mb L: 60/72 MS: 1 CrossOver- 00:07:26.938 [2024-11-20 07:03:31.353970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.938 [2024-11-20 07:03:31.354006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.938 #33 NEW cov: 12540 ft: 15985 corp: 25/878b lim: 85 exec/s: 33 rss: 75Mb L: 33/72 MS: 1 ChangeByte- 00:07:26.938 [2024-11-20 07:03:31.404989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.938 [2024-11-20 07:03:31.405023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.938 [2024-11-20 07:03:31.405098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.938 [2024-11-20 07:03:31.405120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.938 [2024-11-20 07:03:31.405238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:26.938 [2024-11-20 07:03:31.405262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.938 [2024-11-20 07:03:31.405375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:26.938 [2024-11-20 07:03:31.405398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.938 #34 NEW cov: 12540 ft: 16022 corp: 26/950b lim: 85 exec/s: 34 rss: 75Mb L: 72/72 MS: 1 CMP- DE: "\021\000\000\000\000\000\000\000"- 00:07:26.938 [2024-11-20 07:03:31.474622] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:26.938 [2024-11-20 07:03:31.474649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.938 [2024-11-20 07:03:31.474786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:26.938 [2024-11-20 07:03:31.474810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.197 #35 NEW cov: 12540 ft: 16051 corp: 27/986b lim: 85 exec/s: 35 rss: 75Mb L: 36/72 MS: 1 ChangeByte- 00:07:27.197 [2024-11-20 07:03:31.524637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:27.197 [2024-11-20 07:03:31.524663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.197 #36 NEW cov: 12540 ft: 16066 corp: 28/1003b lim: 85 exec/s: 36 rss: 75Mb L: 17/72 MS: 1 EraseBytes- 00:07:27.197 [2024-11-20 07:03:31.574746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:27.197 [2024-11-20 07:03:31.574771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.197 #37 NEW cov: 12540 ft: 16083 corp: 29/1025b lim: 85 exec/s: 37 rss: 75Mb L: 22/72 MS: 1 ChangeByte- 00:07:27.197 [2024-11-20 07:03:31.625444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:27.197 [2024-11-20 07:03:31.625474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.197 [2024-11-20 07:03:31.625579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:27.197 [2024-11-20 07:03:31.625604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.197 [2024-11-20 07:03:31.625723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:27.197 [2024-11-20 07:03:31.625745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.197 #38 NEW cov: 12540 ft: 16101 corp: 30/1085b lim: 85 exec/s: 19 rss: 75Mb L: 60/72 MS: 1 ChangeBinInt- 00:07:27.197 #38 DONE cov: 12540 ft: 16101 corp: 30/1085b lim: 85 exec/s: 19 rss: 75Mb 00:07:27.197 ###### Recommended dictionary. ###### 00:07:27.197 "\023\325V\341" # Uses: 3 00:07:27.197 "\021\000\000\000\000\000\000\000" # Uses: 0 00:07:27.197 ###### End of recommended dictionary. 
###### 00:07:27.197 Done 38 runs in 2 second(s) 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.456 07:03:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:27.457 [2024-11-20 07:03:31.817642] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:27.457 [2024-11-20 07:03:31.817710] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785329 ] 00:07:27.457 [2024-11-20 07:03:32.003711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.716 [2024-11-20 07:03:32.037939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.716 [2024-11-20 07:03:32.097008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.716 [2024-11-20 07:03:32.113276] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:27.716 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.716 INFO: Seed: 1908166473 00:07:27.716 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:27.716 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:27.716 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:27.716 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.716 #2 INITED exec/s: 0 rss: 65Mb 00:07:27.716 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.716 This may also happen if the target rejected all inputs we tried so far 00:07:27.716 [2024-11-20 07:03:32.158680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.716 [2024-11-20 07:03:32.158709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.716 [2024-11-20 07:03:32.158751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:27.716 [2024-11-20 07:03:32.158767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.974 NEW_FUNC[1/716]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:27.974 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.974 #4 NEW cov: 12246 ft: 12235 corp: 2/15b lim: 25 exec/s: 0 rss: 73Mb L: 14/14 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:27.974 [2024-11-20 07:03:32.500857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.974 [2024-11-20 07:03:32.500909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.974 [2024-11-20 07:03:32.501062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:27.974 [2024-11-20 07:03:32.501094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.974 [2024-11-20 07:03:32.501257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:27.974 [2024-11-20 07:03:32.501285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.232 #5 NEW cov: 
12359 ft: 13218 corp: 3/32b lim: 25 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:07:28.232 [2024-11-20 07:03:32.580618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.232 [2024-11-20 07:03:32.580647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 07:03:32.580793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.232 [2024-11-20 07:03:32.580822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.232 #6 NEW cov: 12365 ft: 13434 corp: 4/46b lim: 25 exec/s: 0 rss: 73Mb L: 14/17 MS: 1 ChangeBinInt- 00:07:28.232 [2024-11-20 07:03:32.631147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.232 [2024-11-20 07:03:32.631184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 07:03:32.631316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.232 [2024-11-20 07:03:32.631340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 07:03:32.631478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.232 [2024-11-20 07:03:32.631505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 07:03:32.631641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.232 [2024-11-20 07:03:32.631670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.232 #7 NEW cov: 12450 ft: 14222 corp: 5/67b lim: 25 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:28.232 [2024-11-20 07:03:32.701630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.232 [2024-11-20 07:03:32.701665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 07:03:32.701776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.232 [2024-11-20 07:03:32.701800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 07:03:32.701941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.232 [2024-11-20 07:03:32.701966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 07:03:32.702107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.232 [2024-11-20 07:03:32.702131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 
07:03:32.702274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.232 [2024-11-20 07:03:32.702299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.232 #8 NEW cov: 12450 ft: 14329 corp: 6/92b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CrossOver- 00:07:28.232 [2024-11-20 07:03:32.751138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.232 [2024-11-20 07:03:32.751177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.232 [2024-11-20 07:03:32.751341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.232 [2024-11-20 07:03:32.751367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.232 #9 NEW cov: 12450 ft: 14444 corp: 7/106b lim: 25 exec/s: 0 rss: 73Mb L: 14/25 MS: 1 ShuffleBytes- 00:07:28.491 [2024-11-20 07:03:32.801505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.492 [2024-11-20 07:03:32.801543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.801667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.492 [2024-11-20 07:03:32.801690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.801827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.492 [2024-11-20 07:03:32.801852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.492 #10 NEW cov: 12450 ft: 14485 corp: 8/124b lim: 25 exec/s: 0 rss: 73Mb L: 18/25 MS: 1 InsertByte- 00:07:28.492 [2024-11-20 07:03:32.871738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.492 [2024-11-20 07:03:32.871765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.871903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.492 [2024-11-20 07:03:32.871926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.492 #11 NEW cov: 12450 ft: 14499 corp: 9/138b lim: 25 exec/s: 0 rss: 73Mb L: 14/25 MS: 1 ChangeBit- 00:07:28.492 [2024-11-20 07:03:32.942522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.492 [2024-11-20 07:03:32.942556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.942656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.492 [2024-11-20 07:03:32.942681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.942822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.492 [2024-11-20 07:03:32.942848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.942989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.492 [2024-11-20 07:03:32.943017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.492 #12 NEW cov: 12450 ft: 14592 corp: 10/160b lim: 25 exec/s: 0 rss: 73Mb L: 22/25 MS: 1 CrossOver- 00:07:28.492 [2024-11-20 07:03:32.992971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.492 [2024-11-20 07:03:32.993006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.993095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.492 [2024-11-20 07:03:32.993121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.993267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.492 [2024-11-20 07:03:32.993295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.993437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.492 [2024-11-20 07:03:32.993463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.492 [2024-11-20 07:03:32.993608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.492 [2024-11-20 07:03:32.993632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.492 #13 NEW cov: 12450 ft: 14693 corp: 11/185b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:28.751 [2024-11-20 07:03:33.062830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.751 [2024-11-20 07:03:33.062862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.063020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.751 [2024-11-20 07:03:33.063044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.063190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.751 [2024-11-20 07:03:33.063215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.751 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:28.751 #14 NEW cov: 12473 ft: 14731 corp: 12/203b lim: 25 exec/s: 0 rss: 74Mb L: 18/25 MS: 1 ShuffleBytes- 00:07:28.751 [2024-11-20 07:03:33.133118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.751 [2024-11-20 07:03:33.133154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.133291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.751 [2024-11-20 07:03:33.133314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.133450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.751 [2024-11-20 07:03:33.133472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.751 #15 NEW cov: 12473 ft: 14780 corp: 13/221b lim: 25 exec/s: 15 rss: 74Mb L: 18/25 MS: 1 ChangeByte- 00:07:28.751 [2024-11-20 07:03:33.203809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.751 [2024-11-20 07:03:33.203842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.203940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.751 [2024-11-20 07:03:33.203962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.204098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.751 [2024-11-20 07:03:33.204125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.204259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.751 [2024-11-20 07:03:33.204281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.204427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.751 [2024-11-20 07:03:33.204449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.751 #16 NEW cov: 12473 ft: 14826 corp: 14/246b lim: 25 exec/s: 16 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:28.751 [2024-11-20 07:03:33.254026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.751 [2024-11-20 07:03:33.254060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.254155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.751 [2024-11-20 07:03:33.254180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.254314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.751 [2024-11-20 07:03:33.254339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.254484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:28.751 [2024-11-20 07:03:33.254507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.254675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:28.751 [2024-11-20 07:03:33.254703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.751 #17 NEW cov: 12473 ft: 14935 corp: 15/271b lim: 25 exec/s: 17 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:28.751 [2024-11-20 07:03:33.303785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:28.751 [2024-11-20 07:03:33.303820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.303957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:28.751 [2024-11-20 07:03:33.303980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.751 [2024-11-20 07:03:33.304123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:28.751 [2024-11-20 07:03:33.304149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.011 #18 NEW cov: 12473 ft: 15005 corp: 16/289b lim: 25 exec/s: 18 rss: 74Mb L: 18/25 MS: 1 ShuffleBytes- 00:07:29.011 [2024-11-20 07:03:33.354344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.011 [2024-11-20 07:03:33.354377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.354484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.011 [2024-11-20 07:03:33.354512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.354654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.011 [2024-11-20 07:03:33.354678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.354815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.011 [2024-11-20 07:03:33.354838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.354979] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.011 [2024-11-20 07:03:33.355001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.011 #19 NEW cov: 12473 ft: 15033 corp: 17/314b lim: 25 exec/s: 19 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:07:29.011 [2024-11-20 07:03:33.424629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.011 [2024-11-20 07:03:33.424661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.424761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.011 [2024-11-20 07:03:33.424784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.424919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.011 [2024-11-20 07:03:33.424946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.425087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.011 [2024-11-20 07:03:33.425113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.425253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.011 [2024-11-20 07:03:33.425280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.011 #20 NEW cov: 12473 ft: 15049 corp: 18/339b lim: 25 exec/s: 20 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:29.011 [2024-11-20 07:03:33.474596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.011 [2024-11-20 07:03:33.474636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.474750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.011 [2024-11-20 07:03:33.474774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.474921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.011 [2024-11-20 07:03:33.474945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.475091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.011 [2024-11-20 07:03:33.475118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.011 #21 NEW cov: 12473 ft: 15058 corp: 19/360b lim: 25 exec/s: 21 rss: 74Mb L: 21/25 MS: 1 EraseBytes- 00:07:29.011 [2024-11-20 07:03:33.545035] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.011 [2024-11-20 07:03:33.545068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.545186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.011 [2024-11-20 07:03:33.545214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.545353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.011 [2024-11-20 07:03:33.545378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.545518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.011 [2024-11-20 07:03:33.545544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.011 [2024-11-20 07:03:33.545685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.011 [2024-11-20 07:03:33.545710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.270 #22 NEW cov: 12473 ft: 15078 corp: 20/385b lim: 25 exec/s: 22 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:07:29.270 [2024-11-20 07:03:33.615345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.270 [2024-11-20 07:03:33.615378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.615482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.270 [2024-11-20 07:03:33.615514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.615650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.270 [2024-11-20 07:03:33.615676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.615820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.270 [2024-11-20 07:03:33.615843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.615991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.270 [2024-11-20 07:03:33.616011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.270 #23 NEW cov: 12473 ft: 15085 corp: 21/410b lim: 25 exec/s: 23 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:07:29.270 [2024-11-20 07:03:33.685477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.270 [2024-11-20 07:03:33.685511] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.685633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.270 [2024-11-20 07:03:33.685654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.685790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.270 [2024-11-20 07:03:33.685816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.685956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.270 [2024-11-20 07:03:33.685980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.686119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.270 [2024-11-20 07:03:33.686143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.270 #24 NEW cov: 12473 ft: 15093 corp: 22/435b lim: 25 exec/s: 24 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:07:29.270 [2024-11-20 07:03:33.755494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.270 [2024-11-20 07:03:33.755529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.755628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.270 [2024-11-20 07:03:33.755647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.755787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.270 [2024-11-20 07:03:33.755814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.270 [2024-11-20 07:03:33.755957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.270 [2024-11-20 07:03:33.755977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.270 #25 NEW cov: 12473 ft: 15117 corp: 23/457b lim: 25 exec/s: 25 rss: 74Mb L: 22/25 MS: 1 InsertRepeatedBytes- 00:07:29.529 [2024-11-20 07:03:33.825918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.529 [2024-11-20 07:03:33.825953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:33.826059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.529 [2024-11-20 07:03:33.826080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:33.826232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.529 [2024-11-20 07:03:33.826261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:33.826404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.529 [2024-11-20 07:03:33.826427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:33.826570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.529 [2024-11-20 07:03:33.826601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.529 #26 NEW cov: 12473 ft: 15139 corp: 24/482b lim: 25 exec/s: 26 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:07:29.529 [2024-11-20 07:03:33.875807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.529 [2024-11-20 07:03:33.875842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:33.875951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.529 [2024-11-20 07:03:33.875977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:33.876120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.529 [2024-11-20 07:03:33.876155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:33.876294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.529 [2024-11-20 07:03:33.876320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.529 #27 NEW cov: 12473 ft: 15152 corp: 25/506b lim: 25 exec/s: 27 rss: 74Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:07:29.529 [2024-11-20 07:03:33.925680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.529 [2024-11-20 07:03:33.925717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:33.925874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.529 [2024-11-20 07:03:33.925897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.529 #28 NEW cov: 12473 ft: 15194 corp: 26/520b lim: 25 exec/s: 28 rss: 74Mb L: 14/25 MS: 1 ChangeByte- 00:07:29.529 [2024-11-20 07:03:33.975931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.529 [2024-11-20 07:03:33.975969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:29.529 [2024-11-20 07:03:33.976118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.529 [2024-11-20 07:03:33.976139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.529 #29 NEW cov: 12473 ft: 15216 corp: 27/533b lim: 25 exec/s: 29 rss: 74Mb L: 13/25 MS: 1 EraseBytes- 00:07:29.529 [2024-11-20 07:03:34.026378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.529 [2024-11-20 07:03:34.026414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.529 [2024-11-20 07:03:34.026522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.530 [2024-11-20 07:03:34.026544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.530 [2024-11-20 07:03:34.026684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.530 [2024-11-20 07:03:34.026711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.530 [2024-11-20 07:03:34.026852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.530 [2024-11-20 07:03:34.026876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.530 #30 NEW cov: 12473 ft: 15234 corp: 28/554b lim: 25 exec/s: 30 rss: 75Mb L: 21/25 MS: 1 ShuffleBytes- 00:07:29.788 [2024-11-20 07:03:34.096875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.788 [2024-11-20 07:03:34.096910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.788 [2024-11-20 07:03:34.097017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.788 [2024-11-20 07:03:34.097043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.788 [2024-11-20 07:03:34.097180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.788 [2024-11-20 07:03:34.097205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.788 [2024-11-20 07:03:34.097338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.788 [2024-11-20 07:03:34.097361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.788 [2024-11-20 07:03:34.097511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:29.788 [2024-11-20 07:03:34.097532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.788 #31 NEW cov: 12473 ft: 15248 corp: 29/579b lim: 25 exec/s: 31 rss: 75Mb L: 25/25 MS: 1 ChangeBinInt- 
00:07:29.788 [2024-11-20 07:03:34.166990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:29.789 [2024-11-20 07:03:34.167023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.789 [2024-11-20 07:03:34.167105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:29.789 [2024-11-20 07:03:34.167129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.789 [2024-11-20 07:03:34.167267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:29.789 [2024-11-20 07:03:34.167294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.789 [2024-11-20 07:03:34.167436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:29.789 [2024-11-20 07:03:34.167462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.789 #32 pulse cov: 12473 ft: 15264 corp: 29/579b lim: 25 exec/s: 16 rss: 75Mb 00:07:29.789 #32 NEW cov: 12473 ft: 15264 corp: 30/599b lim: 25 exec/s: 16 rss: 75Mb L: 20/25 MS: 1 EraseBytes- 00:07:29.789 #32 DONE cov: 12473 ft: 15264 corp: 30/599b lim: 25 exec/s: 16 rss: 75Mb 00:07:29.789 Done 32 runs in 2 second(s) 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.789 07:03:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:30.047 [2024-11-20 07:03:34.363082] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:30.047 [2024-11-20 07:03:34.363150] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785812 ] 00:07:30.047 [2024-11-20 07:03:34.550563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.047 [2024-11-20 07:03:34.588172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.307 [2024-11-20 07:03:34.647865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.307 [2024-11-20 07:03:34.664170] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:30.307 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.307 INFO: Seed: 165210501 00:07:30.307 INFO: Loaded 1 modules (388174 inline 8-bit counters): 388174 [0x2c4d34c, 0x2cabf9a), 00:07:30.307 INFO: Loaded 1 PC tables (388174 PCs): 388174 [0x2cabfa0,0x3298480), 00:07:30.307 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:30.307 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.307 #2 INITED exec/s: 0 rss: 65Mb 00:07:30.307 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:30.307 This may also happen if the target rejected all inputs we tried so far 00:07:30.307 [2024-11-20 07:03:34.719937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.307 [2024-11-20 07:03:34.719966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.307 [2024-11-20 07:03:34.720004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.307 [2024-11-20 07:03:34.720019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.307 [2024-11-20 07:03:34.720073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.307 [2024-11-20 07:03:34.720092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.307 [2024-11-20 07:03:34.720146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.307 [2024-11-20 07:03:34.720161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.565 NEW_FUNC[1/717]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:30.565 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.565 #23 NEW cov: 12316 ft: 12314 corp: 2/99b lim: 100 exec/s: 0 rss: 73Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:07:30.565 [2024-11-20 07:03:35.040258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:275973292032000 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.565 [2024-11-20 07:03:35.040289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.565 #32 NEW cov: 12431 ft: 13852 corp: 3/119b lim: 100 exec/s: 0 rss: 73Mb L: 20/98 MS: 4 CopyPart-CrossOver-ChangeBinInt-InsertByte- 00:07:30.565 [2024-11-20 07:03:35.080654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.565 [2024-11-20 07:03:35.080683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.565 [2024-11-20 07:03:35.080732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.565 [2024-11-20 07:03:35.080748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.565 [2024-11-20 07:03:35.080800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.565 [2024-11-20 07:03:35.080814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.565 [2024-11-20 
07:03:35.080867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.565 [2024-11-20 07:03:35.080883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.824 #33 NEW cov: 12437 ft: 14108 corp: 4/217b lim: 100 exec/s: 0 rss: 74Mb L: 98/98 MS: 1 ShuffleBytes- 00:07:30.824 [2024-11-20 07:03:35.140416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:275973292032000 len:53253 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.140444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.824 #34 NEW cov: 12522 ft: 14360 corp: 5/245b lim: 100 exec/s: 0 rss: 74Mb L: 28/98 MS: 1 CMP- DE: ",\353\320\004\000\000\000\000"- 00:07:30.824 [2024-11-20 07:03:35.200865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.200893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.824 [2024-11-20 07:03:35.200930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.200946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.824 [2024-11-20 07:03:35.200999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.201017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.824 #35 NEW cov: 12522 ft: 14738 corp: 6/322b lim: 100 exec/s: 0 rss: 74Mb L: 77/98 MS: 1 InsertRepeatedBytes- 00:07:30.824 [2024-11-20 07:03:35.240845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15842497848603782379 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.240873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.824 [2024-11-20 07:03:35.240910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15842497851538791387 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.240925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.824 #39 NEW cov: 12522 ft: 15086 corp: 7/363b lim: 100 exec/s: 0 rss: 74Mb L: 41/98 MS: 4 PersAutoDict-EraseBytes-CopyPart-InsertRepeatedBytes- DE: ",\353\320\004\000\000\000\000"- 00:07:30.824 [2024-11-20 07:03:35.281223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.281250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.824 [2024-11-20 07:03:35.281304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 
nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.281320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.824 [2024-11-20 07:03:35.281373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.281389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.824 [2024-11-20 07:03:35.281443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.281458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.824 #40 NEW cov: 12522 ft: 15216 corp: 8/461b lim: 100 exec/s: 0 rss: 74Mb L: 98/98 MS: 1 CMP- DE: "\021\000\000\000\000\000\000\000"- 00:07:30.824 [2024-11-20 07:03:35.321178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.321205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.824 [2024-11-20 07:03:35.321241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.321256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.824 [2024-11-20 07:03:35.321310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.824 [2024-11-20 07:03:35.321325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.824 #41 NEW cov: 12522 ft: 15248 corp: 9/539b lim: 100 exec/s: 0 rss: 74Mb L: 78/98 MS: 1 InsertByte- 00:07:31.083 [2024-11-20 07:03:35.381546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.381574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.381621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.381640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.381695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1375731712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.381711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.381767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.381783] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.083 #42 NEW cov: 12522 ft: 15259 corp: 10/624b lim: 100 exec/s: 0 rss: 74Mb L: 85/98 MS: 1 CMP- DE: "\001\214\330\022\014\364BR"- 00:07:31.083 [2024-11-20 07:03:35.421608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.421636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.421688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.421704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.421758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.421773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.421828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.421843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.083 #43 NEW cov: 12522 ft: 15351 corp: 11/722b lim: 100 exec/s: 0 rss: 74Mb L: 98/98 MS: 1 ShuffleBytes- 00:07:31.083 [2024-11-20 07:03:35.481517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.481545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.481604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.481620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.083 #47 NEW cov: 12522 ft: 15384 corp: 12/778b lim: 100 exec/s: 0 rss: 74Mb L: 56/98 MS: 4 EraseBytes-EraseBytes-CopyPart-InsertRepeatedBytes- 00:07:31.083 [2024-11-20 07:03:35.541981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.542011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.542070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.542088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.542141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.542160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.542214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.542230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.083 #48 NEW cov: 12522 ft: 15412 corp: 13/863b lim: 100 exec/s: 0 rss: 74Mb L: 85/98 MS: 1 InsertRepeatedBytes- 00:07:31.083 [2024-11-20 07:03:35.602121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.602150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.602196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.602212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.602265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:15617 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.602279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.083 [2024-11-20 07:03:35.602336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.083 [2024-11-20 07:03:35.602351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.342 NEW_FUNC[1/1]: 0x1c34f28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:31.342 #49 NEW cov: 12545 ft: 15460 corp: 14/948b lim: 100 exec/s: 0 rss: 74Mb L: 85/98 MS: 1 ChangeByte- 00:07:31.343 [2024-11-20 07:03:35.662161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13672292666396491197 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.662189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.662243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13672292666396491197 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.662260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.662314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13672292666396491197 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.662330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.343 #52 NEW cov: 12545 ft: 15538 corp: 15/1016b lim: 100 exec/s: 0 rss: 74Mb L: 
68/98 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:31.343 [2024-11-20 07:03:35.702428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.702458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.702506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.702521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.702576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.702601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.702658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.702673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.343 #53 NEW cov: 12545 ft: 15580 corp: 16/1101b lim: 100 exec/s: 53 rss: 74Mb L: 85/98 MS: 1 ChangeASCIIInt- 00:07:31.343 [2024-11-20 07:03:35.742370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.742400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.742436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.742451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.742505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:61 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.742521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.343 #54 NEW cov: 12545 ft: 15627 corp: 17/1179b lim: 100 exec/s: 54 rss: 74Mb L: 78/98 MS: 1 ChangeByte- 00:07:31.343 [2024-11-20 07:03:35.782669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.782698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.782744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.782760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.782815] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.782831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.782887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.782902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.343 #55 NEW cov: 12545 ft: 15634 corp: 18/1278b lim: 100 exec/s: 55 rss: 75Mb L: 99/99 MS: 1 CopyPart- 00:07:31.343 [2024-11-20 07:03:35.842816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.842845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.842893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.842908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.842961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.842977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.843034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.843050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.343 #56 NEW cov: 12545 ft: 15645 corp: 19/1364b lim: 100 exec/s: 56 rss: 75Mb L: 86/99 MS: 1 InsertByte- 00:07:31.343 [2024-11-20 07:03:35.882819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1300414395876216024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.882848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.882884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.882899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.343 [2024-11-20 07:03:35.882955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.343 [2024-11-20 07:03:35.882970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.602 #60 NEW cov: 12545 ft: 15663 corp: 20/1430b lim: 100 exec/s: 60 rss: 75Mb L: 66/99 MS: 4 ChangeBit-ChangeBinInt-PersAutoDict-InsertRepeatedBytes- DE: 
"\001\214\330\022\014\364BR"- 00:07:31.603 [2024-11-20 07:03:35.922915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13672292666396491197 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:35.922945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:35.922980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13672292666396491197 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:35.922996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:35.923050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13672292666396491197 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:35.923066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.603 #61 NEW cov: 12545 ft: 15689 corp: 21/1499b lim: 100 exec/s: 61 rss: 75Mb L: 69/99 MS: 1 InsertByte- 00:07:31.603 [2024-11-20 07:03:35.983228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:35.983257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:35.983295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:281470681743360 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:35.983311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:35.983365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:35.983381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:35.983436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:35.983452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.603 #62 NEW cov: 12545 ft: 15707 corp: 22/1587b lim: 100 exec/s: 62 rss: 75Mb L: 88/99 MS: 1 InsertRepeatedBytes- 00:07:31.603 [2024-11-20 07:03:36.023196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1300414395876216024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.023224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:36.023260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.023275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:31.603 [2024-11-20 07:03:36.023326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.023342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.603 #63 NEW cov: 12545 ft: 15733 corp: 23/1654b lim: 100 exec/s: 63 rss: 75Mb L: 67/99 MS: 1 InsertByte- 00:07:31.603 [2024-11-20 07:03:36.083527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.083555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:36.083606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.083622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:36.083677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.083692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:36.083747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.083762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.603 #64 NEW cov: 12545 ft: 15742 corp: 24/1752b lim: 100 exec/s: 64 rss: 75Mb L: 98/99 MS: 1 PersAutoDict- DE: ",\353\320\004\000\000\000\000"- 00:07:31.603 [2024-11-20 07:03:36.123478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1300414395876216024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.123505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:36.123562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.123578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.603 [2024-11-20 07:03:36.123637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-11-20 07:03:36.123653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.603 #70 NEW cov: 12545 ft: 15745 corp: 25/1818b lim: 100 exec/s: 70 rss: 75Mb L: 66/99 MS: 1 CMP- DE: "U\2106{\022\330\214\000"- 00:07:31.861 [2024-11-20 07:03:36.163581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13672292662101523901 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.861 [2024-11-20 07:03:36.163615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.861 [2024-11-20 07:03:36.163671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13672292666396491197 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.861 [2024-11-20 07:03:36.163688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.861 [2024-11-20 07:03:36.163744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13672292666396491197 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.861 [2024-11-20 07:03:36.163760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.861 #71 NEW cov: 12545 ft: 15770 corp: 26/1887b lim: 100 exec/s: 71 rss: 75Mb L: 69/99 MS: 1 ChangeBit- 00:07:31.861 [2024-11-20 07:03:36.223576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.861 [2024-11-20 07:03:36.223610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.861 [2024-11-20 07:03:36.223647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.861 [2024-11-20 07:03:36.223661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.861 #72 NEW cov: 12545 ft: 15837 corp: 27/1943b lim: 100 exec/s: 72 rss: 75Mb L: 56/99 MS: 1 ShuffleBytes- 00:07:31.861 [2024-11-20 07:03:36.284083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.861 [2024-11-20 07:03:36.284114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.861 [2024-11-20 07:03:36.284161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.861 [2024-11-20 07:03:36.284177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.862 [2024-11-20 07:03:36.284231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-11-20 07:03:36.284246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.862 [2024-11-20 07:03:36.284302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-11-20 07:03:36.284317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.862 #73 NEW cov: 12545 ft: 15855 corp: 28/2028b lim: 100 exec/s: 73 rss: 75Mb L: 85/99 MS: 1 CrossOver- 00:07:31.862 [2024-11-20 07:03:36.324215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-11-20 
07:03:36.324243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.862 [2024-11-20 07:03:36.324290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:47278999994368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-11-20 07:03:36.324305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.862 [2024-11-20 07:03:36.324359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-11-20 07:03:36.324374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.862 [2024-11-20 07:03:36.324429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-11-20 07:03:36.324442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.862 #74 NEW cov: 12545 ft: 15859 corp: 29/2113b lim: 100 exec/s: 74 rss: 75Mb L: 85/99 MS: 1 ChangeByte- 00:07:31.862 [2024-11-20 07:03:36.363993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1300414395876216024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-11-20 07:03:36.364020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.862 [2024-11-20 07:03:36.364056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:86 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-11-20 07:03:36.364071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.862 #75 NEW cov: 12545 ft: 15875 corp: 30/2154b lim: 100 exec/s: 75 rss: 75Mb L: 41/99 MS: 1 EraseBytes- 00:07:32.121 [2024-11-20 07:03:36.424464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.424491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.424560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.424576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.424633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12898309332789100544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.424648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.424699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.424715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.121 #76 NEW cov: 12545 ft: 15915 corp: 31/2252b lim: 100 exec/s: 76 rss: 75Mb L: 98/99 MS: 1 ChangeByte- 00:07:32.121 [2024-11-20 07:03:36.464590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.464622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.464677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.464691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.464744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.464759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.464812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.464827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.121 #77 NEW cov: 12545 ft: 15936 corp: 32/2338b lim: 100 exec/s: 77 rss: 75Mb L: 86/99 MS: 1 InsertByte- 00:07:32.121 [2024-11-20 07:03:36.524742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.524768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.524835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.524851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.524903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:15617 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.524918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.524970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.524986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.121 #78 NEW cov: 12545 ft: 15947 corp: 33/2423b lim: 100 exec/s: 78 rss: 75Mb L: 85/99 MS: 1 PersAutoDict- DE: "\021\000\000\000\000\000\000\000"- 00:07:32.121 [2024-11-20 07:03:36.584923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.584950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.585010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.585026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.585078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.585092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.585144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.585158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.121 #79 NEW cov: 12545 ft: 15958 corp: 34/2522b lim: 100 exec/s: 79 rss: 75Mb L: 99/99 MS: 1 InsertByte- 00:07:32.121 [2024-11-20 07:03:36.625044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:4806 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.625072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.625134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.625150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.121 [2024-11-20 07:03:36.625203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.121 [2024-11-20 07:03:36.625218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.122 [2024-11-20 07:03:36.625272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-11-20 07:03:36.625286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.122 #80 NEW cov: 12545 ft: 15966 corp: 35/2607b lim: 100 exec/s: 80 rss: 75Mb L: 85/99 MS: 1 CMP- DE: "\000\214\330\022\305\250U\012"- 00:07:32.122 [2024-11-20 07:03:36.665167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-11-20 07:03:36.665194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.122 [2024-11-20 07:03:36.665259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-11-20 07:03:36.665274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.122 [2024-11-20 
07:03:36.665327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-11-20 07:03:36.665341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.122 [2024-11-20 07:03:36.665393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-11-20 07:03:36.665407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.381 #81 NEW cov: 12545 ft: 15986 corp: 36/2705b lim: 100 exec/s: 40 rss: 75Mb L: 98/99 MS: 1 CopyPart- 00:07:32.381 #81 DONE cov: 12545 ft: 15986 corp: 36/2705b lim: 100 exec/s: 40 rss: 75Mb 00:07:32.381 ###### Recommended dictionary. ###### 00:07:32.381 ",\353\320\004\000\000\000\000" # Uses: 3 00:07:32.381 "\021\000\000\000\000\000\000\000" # Uses: 1 00:07:32.381 "\001\214\330\022\014\364BR" # Uses: 1 00:07:32.381 "U\2106{\022\330\214\000" # Uses: 0 00:07:32.381 "\000\214\330\022\305\250U\012" # Uses: 0 00:07:32.381 ###### End of recommended dictionary. ###### 00:07:32.381 Done 81 runs in 2 second(s) 00:07:32.381 07:03:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.381 07:03:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.381 07:03:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.381 07:03:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:32.381 00:07:32.381 real 1m4.584s 00:07:32.381 user 1m40.667s 00:07:32.381 sys 0m7.904s 00:07:32.381 07:03:36 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.381 07:03:36 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 ************************************ 00:07:32.381 END TEST nvmf_llvm_fuzz 00:07:32.381 ************************************ 00:07:32.381 07:03:36 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:32.381 07:03:36 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:32.381 07:03:36 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:32.381 07:03:36 llvm_fuzz -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:32.381 07:03:36 llvm_fuzz -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.381 07:03:36 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 ************************************ 00:07:32.381 START TEST vfio_llvm_fuzz 00:07:32.381 ************************************ 00:07:32.381 07:03:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:32.643 * Looking for test storage... 
00:07:32.643 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.643 07:03:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:32.643 07:03:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:07:32.643 07:03:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.643 --rc genhtml_branch_coverage=1 00:07:32.643 --rc genhtml_function_coverage=1 00:07:32.643 --rc genhtml_legend=1 00:07:32.643 --rc geninfo_all_blocks=1 00:07:32.643 --rc geninfo_unexecuted_blocks=1 00:07:32.643 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.643 ' 00:07:32.643 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.644 --rc genhtml_branch_coverage=1 00:07:32.644 --rc genhtml_function_coverage=1 00:07:32.644 --rc genhtml_legend=1 00:07:32.644 --rc geninfo_all_blocks=1 00:07:32.644 --rc geninfo_unexecuted_blocks=1 00:07:32.644 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.644 ' 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.644 --rc genhtml_branch_coverage=1 00:07:32.644 --rc genhtml_function_coverage=1 00:07:32.644 --rc genhtml_legend=1 00:07:32.644 --rc geninfo_all_blocks=1 00:07:32.644 --rc geninfo_unexecuted_blocks=1 00:07:32.644 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.644 ' 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.644 --rc genhtml_branch_coverage=1 00:07:32.644 --rc genhtml_function_coverage=1 00:07:32.644 --rc genhtml_legend=1 00:07:32.644 --rc geninfo_all_blocks=1 00:07:32.644 --rc geninfo_unexecuted_blocks=1 00:07:32.644 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.644 ' 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:32.644 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:32.645 #define SPDK_CONFIG_H 00:07:32.645 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:32.645 #define SPDK_CONFIG_APPS 1 00:07:32.645 #define SPDK_CONFIG_ARCH native 00:07:32.645 #undef SPDK_CONFIG_ASAN 00:07:32.645 #undef SPDK_CONFIG_AVAHI 00:07:32.645 #undef SPDK_CONFIG_CET 00:07:32.645 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:32.645 #define SPDK_CONFIG_COVERAGE 1 00:07:32.645 #define SPDK_CONFIG_CROSS_PREFIX 00:07:32.645 #undef SPDK_CONFIG_CRYPTO 00:07:32.645 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:32.645 #undef SPDK_CONFIG_CUSTOMOCF 00:07:32.645 #undef SPDK_CONFIG_DAOS 00:07:32.645 #define SPDK_CONFIG_DAOS_DIR 00:07:32.645 #define SPDK_CONFIG_DEBUG 1 00:07:32.645 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:32.645 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:32.645 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:32.645 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:32.645 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:32.645 #undef SPDK_CONFIG_DPDK_UADK 00:07:32.645 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:32.645 #define SPDK_CONFIG_EXAMPLES 1 00:07:32.645 #undef SPDK_CONFIG_FC 00:07:32.645 #define SPDK_CONFIG_FC_PATH 00:07:32.645 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:32.645 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:32.645 #define SPDK_CONFIG_FSDEV 1 00:07:32.645 #undef SPDK_CONFIG_FUSE 00:07:32.645 #define SPDK_CONFIG_FUZZER 1 00:07:32.645 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:32.645 #undef 
SPDK_CONFIG_GOLANG 00:07:32.645 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:32.645 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:32.645 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:32.645 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:32.645 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:32.645 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:32.645 #undef SPDK_CONFIG_HAVE_LZ4 00:07:32.645 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:32.645 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:32.645 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:32.645 #define SPDK_CONFIG_IDXD 1 00:07:32.645 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:32.645 #undef SPDK_CONFIG_IPSEC_MB 00:07:32.645 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:32.645 #define SPDK_CONFIG_ISAL 1 00:07:32.645 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:32.645 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:32.645 #define SPDK_CONFIG_LIBDIR 00:07:32.645 #undef SPDK_CONFIG_LTO 00:07:32.645 #define SPDK_CONFIG_MAX_LCORES 128 00:07:32.645 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:32.645 #define SPDK_CONFIG_NVME_CUSE 1 00:07:32.645 #undef SPDK_CONFIG_OCF 00:07:32.645 #define SPDK_CONFIG_OCF_PATH 00:07:32.645 #define SPDK_CONFIG_OPENSSL_PATH 00:07:32.645 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:32.645 #define SPDK_CONFIG_PGO_DIR 00:07:32.645 #undef SPDK_CONFIG_PGO_USE 00:07:32.645 #define SPDK_CONFIG_PREFIX /usr/local 00:07:32.645 #undef SPDK_CONFIG_RAID5F 00:07:32.645 #undef SPDK_CONFIG_RBD 00:07:32.645 #define SPDK_CONFIG_RDMA 1 00:07:32.645 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:32.645 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:32.645 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:32.645 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:32.645 #undef SPDK_CONFIG_SHARED 00:07:32.645 #undef SPDK_CONFIG_SMA 00:07:32.645 #define SPDK_CONFIG_TESTS 1 00:07:32.645 #undef SPDK_CONFIG_TSAN 00:07:32.645 #define SPDK_CONFIG_UBLK 1 00:07:32.645 #define SPDK_CONFIG_UBSAN 1 00:07:32.645 #undef SPDK_CONFIG_UNIT_TESTS 00:07:32.645 #undef SPDK_CONFIG_URING 00:07:32.645 #define SPDK_CONFIG_URING_PATH 00:07:32.645 #undef SPDK_CONFIG_URING_ZNS 00:07:32.645 #undef SPDK_CONFIG_USDT 00:07:32.645 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:32.645 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:32.645 #define SPDK_CONFIG_VFIO_USER 1 00:07:32.645 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:32.645 #define SPDK_CONFIG_VHOST 1 00:07:32.645 #define SPDK_CONFIG_VIRTIO 1 00:07:32.645 #undef SPDK_CONFIG_VTUNE 00:07:32.645 #define SPDK_CONFIG_VTUNE_DIR 00:07:32.645 #define SPDK_CONFIG_WERROR 1 00:07:32.645 #define SPDK_CONFIG_WPDK_DIR 00:07:32.645 #undef SPDK_CONFIG_XNVME 00:07:32.645 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:32.645 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:32.646 07:03:37 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
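The long run of autotest_common.sh records on either side of this point is the test-flag initialization block: for each knob the script evaluates a bare default (the `: 0`, `: 1`, and `: rdma` commands in the trace are `:` no-ops whose arguments are the already-expanded defaults) and then exports the matching SPDK_* variable so every child script sees the same configuration. A minimal sketch of that idiom, using a few of the flags visible in the trace (the actual variable list in autotest_common.sh is much longer):

    # Sketch of the default-then-export idiom traced above. ":" is the
    # shell no-op; "${VAR:=default}" assigns only when VAR is unset or
    # empty, so values sourced earlier from autorun-spdk.conf win.
    : "${SPDK_RUN_VALGRIND:=0}"
    export SPDK_RUN_VALGRIND
    : "${SPDK_TEST_FUZZER:=0}"
    export SPDK_TEST_FUZZER
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
    export SPDK_TEST_NVMF_TRANSPORT

This is why the xtrace shows `: 1` before `export SPDK_TEST_FUZZER`: the conf file already set the flag, so the default expansion reproduces the configured value rather than 0.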
00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:32.646 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3786187 ]] 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3786187 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:07:32.647 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.gjj7go 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.gjj7go/tests/vfio /tmp/spdk.gjj7go 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=54014259200 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730607104 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=7716347904 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.908 07:03:37 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30861873152 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865301504 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=3428352 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=12340121600 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346122240 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=6000640 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30864785408 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865305600 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=520192 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=6173044736 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173057024 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:07:32.908 * Looking for test storage... 
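The records above are set_test_storage() enumerating mounts: it runs `df -T`, drops the header, and loads each mount's backing device, filesystem type, capacity, usage, and free space into associative arrays keyed by mount point. A minimal sketch of the same pattern, assuming the stock `df -T` column order seen in the trace (source, fstype, size, used, avail, use%, mount point):

    # Minimal sketch of the mount-enumeration pattern traced above in
    # set_test_storage(): walk `df -T`, skip the header, and key each
    # mount's attributes by its mount point.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source   # backing device
        fss["$mount"]=$fs          # filesystem type
        sizes["$mount"]=$size      # total capacity
        uses["$mount"]=$use        # space consumed
        avails["$mount"]=$avail    # space free
    done < <(df -T | grep -v Filesystem)

The records that follow compare the chosen mount's avails entry (target_space) against requested_size, then export SPDK_TEST_STORAGE once a candidate directory sits on a filesystem with enough headroom.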
00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=54014259200 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=9930940416 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.908 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.908 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.909 --rc genhtml_branch_coverage=1 00:07:32.909 --rc genhtml_function_coverage=1 00:07:32.909 --rc genhtml_legend=1 00:07:32.909 --rc geninfo_all_blocks=1 00:07:32.909 --rc geninfo_unexecuted_blocks=1 00:07:32.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.909 ' 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.909 --rc genhtml_branch_coverage=1 00:07:32.909 --rc genhtml_function_coverage=1 00:07:32.909 --rc genhtml_legend=1 00:07:32.909 --rc geninfo_all_blocks=1 00:07:32.909 --rc geninfo_unexecuted_blocks=1 00:07:32.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.909 ' 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.909 --rc genhtml_branch_coverage=1 00:07:32.909 --rc genhtml_function_coverage=1 00:07:32.909 --rc genhtml_legend=1 00:07:32.909 --rc geninfo_all_blocks=1 00:07:32.909 --rc geninfo_unexecuted_blocks=1 00:07:32.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.909 ' 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.909 --rc genhtml_branch_coverage=1 00:07:32.909 --rc genhtml_function_coverage=1 00:07:32.909 --rc genhtml_legend=1 00:07:32.909 --rc geninfo_all_blocks=1 00:07:32.909 --rc geninfo_unexecuted_blocks=1 00:07:32.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.909 ' 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:32.909 07:03:37 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:32.909 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:32.909 07:03:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:32.909 [2024-11-20 07:03:37.380649] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:32.909 [2024-11-20 07:03:37.380717] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786398 ] 00:07:32.909 [2024-11-20 07:03:37.459784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.168 [2024-11-20 07:03:37.500785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.168 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.168 INFO: Seed: 3177215182 00:07:33.168 INFO: Loaded 1 modules (385410 inline 8-bit counters): 385410 [0x2c0db4c, 0x2c6bcce), 00:07:33.168 INFO: Loaded 1 PC tables (385410 PCs): 385410 [0x2c6bcd0,0x324d4f0), 00:07:33.168 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:33.168 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.168 #2 INITED exec/s: 0 rss: 66Mb 00:07:33.168 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.168 This may also happen if the target rejected all inputs we tried so far 00:07:33.426 [2024-11-20 07:03:37.749759] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:33.684 NEW_FUNC[1/674]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:33.684 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:33.684 #11 NEW cov: 11173 ft: 10953 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 4 ChangeBit-ChangeBit-InsertRepeatedBytes-CopyPart- 00:07:33.942 #13 NEW cov: 11197 ft: 14163 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 CrossOver-InsertByte- 00:07:33.942 #14 NEW cov: 11197 ft: 15344 corp: 4/19b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeByte- 00:07:33.942 #15 NEW cov: 11197 ft: 16135 corp: 5/25b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:07:34.201 NEW_FUNC[1/1]: 0x1c01378 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:34.201 #16 NEW cov: 11214 ft: 17497 corp: 6/31b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:34.201 #17 NEW cov: 11214 ft: 17586 corp: 7/37b lim: 6 exec/s: 17 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:34.460 #23 NEW cov: 11214 ft: 17951 corp: 8/43b lim: 6 exec/s: 23 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:34.460 #24 NEW cov: 11214 ft: 18021 corp: 9/49b lim: 6 exec/s: 24 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:07:34.718 #25 NEW cov: 11214 ft: 18267 corp: 10/55b lim: 6 exec/s: 25 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:07:34.718 #26 NEW cov: 11214 ft: 18440 corp: 11/61b lim: 6 exec/s: 26 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:34.976 #32 NEW cov: 11214 ft: 18529 corp: 12/67b lim: 6 exec/s: 32 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:07:34.976 #46 NEW cov: 11214 ft: 18597 corp: 13/73b lim: 6 exec/s: 46 rss: 76Mb L: 6/6 MS: 4 
InsertRepeatedBytes-ChangeByte-InsertByte-CrossOver- 00:07:35.235 #47 NEW cov: 11221 ft: 18630 corp: 14/79b lim: 6 exec/s: 47 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:07:35.235 #49 NEW cov: 11221 ft: 18690 corp: 15/85b lim: 6 exec/s: 49 rss: 76Mb L: 6/6 MS: 2 CopyPart-CrossOver- 00:07:35.493 #50 NEW cov: 11221 ft: 18708 corp: 16/91b lim: 6 exec/s: 25 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:07:35.493 #50 DONE cov: 11221 ft: 18708 corp: 16/91b lim: 6 exec/s: 25 rss: 76Mb 00:07:35.493 Done 50 runs in 2 second(s) 00:07:35.493 [2024-11-20 07:03:39.816788] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:35.493 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:35.750 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:35.750 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:35.750 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.750 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:35.750 07:03:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:35.750 [2024-11-20 07:03:40.068086] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
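The `#N NEW cov: ...` records in the run above are standard libFuzzer status lines: `cov` counts covered edges, `ft` features, `corp` gives corpus units and total bytes, `lim` the current input-length limit, `exec/s` the execution rate, `rss` resident memory, and `MS:` the mutation sequence that produced the input; the closing `Done N runs in S second(s)` record summarizes the run. For eyeballing throughput across several runs in a captured console log, a throwaway filter along these lines works (hypothetical helper, not part of the SPDK tree; assumes the console output was saved to fuzz.log):

    # Hypothetical post-processing helper (not part of the SPDK repo):
    # total up "Done <N> runs in <S> second(s)" summaries from a log.
    # Scanning fields for "Done" keeps it robust to timestamp prefixes.
    awk '/Done [0-9]+ runs in [0-9]+ second/ {
             for (i = 1; i <= NF; i++)
                 if ($i == "Done") { runs += $(i + 1); secs += $(i + 4) }
         }
         END { if (secs > 0) printf "runs=%d  avg exec/s=%.1f\n", runs, runs / secs }' fuzz.log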
00:07:35.750 [2024-11-20 07:03:40.068141] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786777 ] 00:07:35.750 [2024-11-20 07:03:40.147627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.750 [2024-11-20 07:03:40.188467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.009 INFO: Running with entropic power schedule (0xFF, 100). 00:07:36.009 INFO: Seed: 1564266936 00:07:36.009 INFO: Loaded 1 modules (385410 inline 8-bit counters): 385410 [0x2c0db4c, 0x2c6bcce), 00:07:36.009 INFO: Loaded 1 PC tables (385410 PCs): 385410 [0x2c6bcd0,0x324d4f0), 00:07:36.009 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:36.009 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.009 #2 INITED exec/s: 0 rss: 66Mb 00:07:36.009 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:36.009 This may also happen if the target rejected all inputs we tried so far 00:07:36.009 [2024-11-20 07:03:40.428315] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:36.009 [2024-11-20 07:03:40.479640] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:36.009 [2024-11-20 07:03:40.479664] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:36.009 [2024-11-20 07:03:40.479709] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:36.526 NEW_FUNC[1/676]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:36.526 NEW_FUNC[2/676]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:36.526 #7 NEW cov: 11173 ft: 10875 corp: 2/5b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 5 CrossOver-CopyPart-ChangeByte-InsertByte-InsertByte- 00:07:36.526 [2024-11-20 07:03:40.948114] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:36.526 [2024-11-20 07:03:40.948146] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:36.526 [2024-11-20 07:03:40.948165] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:36.526 #13 NEW cov: 11187 ft: 14114 corp: 3/9b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:07:36.785 [2024-11-20 07:03:41.146240] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:36.785 [2024-11-20 07:03:41.146263] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:36.785 [2024-11-20 07:03:41.146281] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:36.785 NEW_FUNC[1/1]: 0x1c01378 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:36.785 #24 NEW cov: 11204 ft: 15874 corp: 4/13b lim: 4 exec/s: 0 rss: 77Mb L: 4/4 MS: 1 ChangeBit- 00:07:36.785 [2024-11-20 07:03:41.334634] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:36.785 [2024-11-20 07:03:41.334656] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid 
argument 00:07:36.785 [2024-11-20 07:03:41.334673] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.043 #25 NEW cov: 11204 ft: 16321 corp: 5/17b lim: 4 exec/s: 25 rss: 77Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:37.043 [2024-11-20 07:03:41.519644] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.043 [2024-11-20 07:03:41.519667] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.043 [2024-11-20 07:03:41.519685] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.302 #27 NEW cov: 11204 ft: 17139 corp: 6/21b lim: 4 exec/s: 27 rss: 77Mb L: 4/4 MS: 2 EraseBytes-CopyPart- 00:07:37.302 [2024-11-20 07:03:41.705702] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.302 [2024-11-20 07:03:41.705725] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.302 [2024-11-20 07:03:41.705742] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.302 #31 NEW cov: 11204 ft: 17534 corp: 7/25b lim: 4 exec/s: 31 rss: 77Mb L: 4/4 MS: 4 EraseBytes-ShuffleBytes-ChangeBinInt-CrossOver- 00:07:37.561 [2024-11-20 07:03:41.901809] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.561 [2024-11-20 07:03:41.901831] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.561 [2024-11-20 07:03:41.901848] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.561 #37 NEW cov: 11204 ft: 17753 corp: 8/29b lim: 4 exec/s: 37 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:07:37.561 [2024-11-20 07:03:42.084772] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.561 [2024-11-20 07:03:42.084795] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.561 [2024-11-20 07:03:42.084812] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:37.818 #43 NEW cov: 11211 ft: 17946 corp: 9/33b lim: 4 exec/s: 43 rss: 77Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:37.818 [2024-11-20 07:03:42.269111] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:37.818 [2024-11-20 07:03:42.269134] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:37.818 [2024-11-20 07:03:42.269151] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:38.076 #44 NEW cov: 11211 ft: 18243 corp: 10/37b lim: 4 exec/s: 44 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:07:38.076 [2024-11-20 07:03:42.453760] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:38.076 [2024-11-20 07:03:42.453782] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:38.076 [2024-11-20 07:03:42.453799] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:38.076 #45 NEW cov: 11211 ft: 18310 corp: 11/41b lim: 4 exec/s: 22 rss: 77Mb L: 4/4 MS: 1 CopyPart- 00:07:38.076 #45 DONE cov: 11211 ft: 18310 corp: 11/41b lim: 4 exec/s: 22 rss: 77Mb 00:07:38.076 Done 45 runs in 2 second(s) 00:07:38.076 [2024-11-20 07:03:42.590793] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 
/var/tmp/suppress_vfio_fuzz 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:38.335 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:38.335 07:03:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:38.335 [2024-11-20 07:03:42.866885] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:38.335 [2024-11-20 07:03:42.866955] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787316 ] 00:07:38.594 [2024-11-20 07:03:42.946633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.594 [2024-11-20 07:03:42.987327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.853 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:38.853 INFO: Seed: 67269206 00:07:38.853 INFO: Loaded 1 modules (385410 inline 8-bit counters): 385410 [0x2c0db4c, 0x2c6bcce), 00:07:38.853 INFO: Loaded 1 PC tables (385410 PCs): 385410 [0x2c6bcd0,0x324d4f0), 00:07:38.853 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:38.853 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.853 #2 INITED exec/s: 0 rss: 66Mb 00:07:38.854 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:38.854 This may also happen if the target rejected all inputs we tried so far 00:07:38.854 [2024-11-20 07:03:43.225703] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:38.854 [2024-11-20 07:03:43.248559] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.113 NEW_FUNC[1/675]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:39.113 NEW_FUNC[2/675]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:39.113 #19 NEW cov: 11155 ft: 11042 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 2 InsertRepeatedBytes-CopyPart- 00:07:39.113 [2024-11-20 07:03:43.656665] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.371 #25 NEW cov: 11169 ft: 14281 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:39.372 [2024-11-20 07:03:43.776796] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.372 #26 NEW cov: 11170 ft: 15497 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeByte- 00:07:39.372 [2024-11-20 07:03:43.896821] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.681 #27 NEW cov: 11170 ft: 15891 corp: 5/33b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:39.681 [2024-11-20 07:03:44.007715] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.681 NEW_FUNC[1/1]: 0x1c01378 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:39.681 #28 NEW cov: 11187 ft: 17008 corp: 6/41b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:39.681 [2024-11-20 07:03:44.127668] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.681 #29 NEW cov: 11187 ft: 17483 corp: 7/49b lim: 8 exec/s: 29 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:39.938 [2024-11-20 07:03:44.239272] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.938 #30 NEW cov: 11187 ft: 17980 corp: 8/57b lim: 8 exec/s: 30 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:07:39.938 [2024-11-20 07:03:44.360236] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:39.938 #31 NEW cov: 11187 ft: 18120 corp: 9/65b lim: 8 exec/s: 31 rss: 76Mb L: 8/8 MS: 1 CrossOver- 00:07:39.938 [2024-11-20 07:03:44.480498] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.195 #32 NEW cov: 11187 ft: 18263 corp: 10/73b lim: 8 exec/s: 32 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:07:40.195 [2024-11-20 07:03:44.601287] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.195 #33 NEW cov: 11187 
ft: 18345 corp: 11/81b lim: 8 exec/s: 33 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:07:40.195 [2024-11-20 07:03:44.711396] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.453 #34 NEW cov: 11187 ft: 18558 corp: 12/89b lim: 8 exec/s: 34 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:07:40.453 [2024-11-20 07:03:44.832525] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.453 #40 NEW cov: 11187 ft: 18661 corp: 13/97b lim: 8 exec/s: 40 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:07:40.453 [2024-11-20 07:03:44.942651] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.711 #41 NEW cov: 11194 ft: 18696 corp: 14/105b lim: 8 exec/s: 41 rss: 76Mb L: 8/8 MS: 1 CrossOver- 00:07:40.711 [2024-11-20 07:03:45.065589] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.711 #42 NEW cov: 11194 ft: 18882 corp: 15/113b lim: 8 exec/s: 42 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:07:40.711 [2024-11-20 07:03:45.176508] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:40.711 #43 NEW cov: 11194 ft: 19252 corp: 16/121b lim: 8 exec/s: 21 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:40.711 #43 DONE cov: 11194 ft: 19252 corp: 16/121b lim: 8 exec/s: 21 rss: 76Mb 00:07:40.711 Done 43 runs in 2 second(s) 00:07:40.711 [2024-11-20 07:03:45.266794] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:40.971 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:40.971 07:03:45 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:41.229 [2024-11-20 07:03:45.529273] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:41.229 [2024-11-20 07:03:45.529349] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787850 ] 00:07:41.229 [2024-11-20 07:03:45.607047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.229 [2024-11-20 07:03:45.647938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.487 INFO: Running with entropic power schedule (0xFF, 100). 00:07:41.487 INFO: Seed: 2728269868 00:07:41.487 INFO: Loaded 1 modules (385410 inline 8-bit counters): 385410 [0x2c0db4c, 0x2c6bcce), 00:07:41.487 INFO: Loaded 1 PC tables (385410 PCs): 385410 [0x2c6bcd0,0x324d4f0), 00:07:41.487 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:41.487 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.487 #2 INITED exec/s: 0 rss: 65Mb 00:07:41.487 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:41.487 This may also happen if the target rejected all inputs we tried so far 00:07:41.487 [2024-11-20 07:03:45.887972] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:42.002 NEW_FUNC[1/675]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:42.002 NEW_FUNC[2/675]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:42.002 #193 NEW cov: 11161 ft: 10848 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:42.002 #194 NEW cov: 11178 ft: 14106 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:42.262 NEW_FUNC[1/1]: 0x1c01378 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:42.262 #200 NEW cov: 11195 ft: 15881 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CMP- DE: "\004\000\000\000\000\000\000\000"- 00:07:42.520 #201 NEW cov: 11195 ft: 16663 corp: 5/129b lim: 32 exec/s: 201 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:42.777 #202 NEW cov: 11195 ft: 16963 corp: 6/161b lim: 32 exec/s: 202 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:42.777 #208 NEW cov: 11195 ft: 17347 corp: 7/193b lim: 32 exec/s: 208 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:43.034 #209 NEW cov: 11195 ft: 18080 corp: 8/225b lim: 32 exec/s: 209 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:43.292 #216 NEW cov: 11202 ft: 18297 corp: 9/257b lim: 32 exec/s: 216 rss: 76Mb L: 32/32 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:07:43.551 #217 NEW cov: 11202 ft: 18366 corp: 10/289b lim: 32 exec/s: 108 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:43.551 #217 DONE cov: 11202 ft: 18366 corp: 10/289b lim: 32 exec/s: 108 rss: 76Mb 00:07:43.551 ###### Recommended dictionary. ###### 00:07:43.551 "\004\000\000\000\000\000\000\000" # Uses: 0 00:07:43.551 ###### End of recommended dictionary. 
###### 00:07:43.551 Done 217 runs in 2 second(s) 00:07:43.551 [2024-11-20 07:03:47.877788] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:43.551 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:43.810 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:43.810 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:43.810 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:43.810 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:43.810 07:03:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:43.810 [2024-11-20 07:03:48.144925] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:43.810 [2024-11-20 07:03:48.144997] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3788267 ] 00:07:43.810 [2024-11-20 07:03:48.223565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.810 [2024-11-20 07:03:48.263623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.069 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.069 INFO: Seed: 1057306536 00:07:44.069 INFO: Loaded 1 modules (385410 inline 8-bit counters): 385410 [0x2c0db4c, 0x2c6bcce), 00:07:44.069 INFO: Loaded 1 PC tables (385410 PCs): 385410 [0x2c6bcd0,0x324d4f0), 00:07:44.069 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:44.069 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.069 #2 INITED exec/s: 0 rss: 66Mb 00:07:44.069 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.069 This may also happen if the target rejected all inputs we tried so far 00:07:44.069 [2024-11-20 07:03:48.512400] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:44.587 NEW_FUNC[1/665]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:44.587 NEW_FUNC[2/665]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:44.587 #21 NEW cov: 11024 ft: 11128 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 CMP-ChangeBinInt-ChangeByte-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000\000"- 00:07:44.847 NEW_FUNC[1/10]: 0x15bdf78 in handle_cmd_rsp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:2506 00:07:44.847 NEW_FUNC[2/10]: 0x186e8d8 in _is_io_flags_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:141 00:07:44.847 #22 NEW cov: 11180 ft: 14190 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:44.847 NEW_FUNC[1/1]: 0x1c01378 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:44.847 #23 NEW cov: 11197 ft: 15802 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:45.107 #24 NEW cov: 11197 ft: 16883 corp: 5/129b lim: 32 exec/s: 24 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:45.367 #30 NEW cov: 11197 ft: 17343 corp: 6/161b lim: 32 exec/s: 30 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:45.367 #41 NEW cov: 11197 ft: 17502 corp: 7/193b lim: 32 exec/s: 41 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:45.625 #42 NEW cov: 11197 ft: 17557 corp: 8/225b lim: 32 exec/s: 42 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:45.884 #43 NEW cov: 11204 ft: 17757 corp: 9/257b lim: 32 exec/s: 43 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:46.144 #44 NEW cov: 11204 ft: 17980 corp: 10/289b lim: 32 exec/s: 44 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:46.144 #45 NEW cov: 11204 ft: 18084 corp: 11/321b lim: 32 exec/s: 22 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:46.144 #45 DONE cov: 11204 ft: 18084 corp: 11/321b lim: 32 exec/s: 22 rss: 75Mb 00:07:46.144 ###### Recommended dictionary. ###### 00:07:46.144 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:46.144 ###### End of recommended dictionary. 
###### 00:07:46.144 Done 45 runs in 2 second(s) 00:07:46.144 [2024-11-20 07:03:50.660785] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:46.402 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:46.402 07:03:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:46.402 [2024-11-20 07:03:50.929555] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:46.402 [2024-11-20 07:03:50.929643] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3788681 ] 00:07:46.661 [2024-11-20 07:03:51.012349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.661 [2024-11-20 07:03:51.053232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.920 INFO: Running with entropic power schedule (0xFF, 100). 00:07:46.920 INFO: Seed: 3840308197 00:07:46.920 INFO: Loaded 1 modules (385410 inline 8-bit counters): 385410 [0x2c0db4c, 0x2c6bcce), 00:07:46.920 INFO: Loaded 1 PC tables (385410 PCs): 385410 [0x2c6bcd0,0x324d4f0), 00:07:46.920 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:46.920 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.920 #2 INITED exec/s: 0 rss: 66Mb 00:07:46.920 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:46.920 This may also happen if the target rejected all inputs we tried so far 00:07:46.920 [2024-11-20 07:03:51.294207] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:46.920 [2024-11-20 07:03:51.353640] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.920 [2024-11-20 07:03:51.353677] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.440 NEW_FUNC[1/676]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:47.440 NEW_FUNC[2/676]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:47.440 #74 NEW cov: 11172 ft: 10746 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 2 InsertRepeatedBytes-InsertByte- 00:07:47.440 [2024-11-20 07:03:51.821145] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.440 [2024-11-20 07:03:51.821187] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.440 #80 NEW cov: 11186 ft: 13878 corp: 3/27b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:07:47.699 [2024-11-20 07:03:52.020277] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.699 [2024-11-20 07:03:52.020307] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.699 NEW_FUNC[1/1]: 0x1c01378 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:47.699 #81 NEW cov: 11206 ft: 14933 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:47.699 [2024-11-20 07:03:52.219401] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.699 [2024-11-20 07:03:52.219431] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.958 #87 NEW cov: 11206 ft: 15990 corp: 5/53b lim: 13 exec/s: 87 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:07:47.958 [2024-11-20 07:03:52.418471] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.958 [2024-11-20 07:03:52.418499] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.217 #88 
NEW cov: 11206 ft: 16101 corp: 6/66b lim: 13 exec/s: 88 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:07:48.217 [2024-11-20 07:03:52.619006] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.217 [2024-11-20 07:03:52.619036] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.217 #89 NEW cov: 11206 ft: 16305 corp: 7/79b lim: 13 exec/s: 89 rss: 75Mb L: 13/13 MS: 1 CMP- DE: "\001\007"- 00:07:48.477 [2024-11-20 07:03:52.820948] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.477 [2024-11-20 07:03:52.820979] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.477 #90 NEW cov: 11206 ft: 16414 corp: 8/92b lim: 13 exec/s: 90 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:07:48.477 [2024-11-20 07:03:53.023880] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.477 [2024-11-20 07:03:53.023912] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.735 #91 NEW cov: 11213 ft: 16907 corp: 9/105b lim: 13 exec/s: 91 rss: 75Mb L: 13/13 MS: 1 PersAutoDict- DE: "\001\007"- 00:07:48.735 [2024-11-20 07:03:53.223403] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.735 [2024-11-20 07:03:53.223434] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:49.018 #92 NEW cov: 11213 ft: 17230 corp: 10/118b lim: 13 exec/s: 46 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:07:49.018 #92 DONE cov: 11213 ft: 17230 corp: 10/118b lim: 13 exec/s: 46 rss: 76Mb 00:07:49.018 ###### Recommended dictionary. ###### 00:07:49.018 "\001\007" # Uses: 1 00:07:49.018 ###### End of recommended dictionary. 
###### 00:07:49.018 Done 92 runs in 2 second(s) 00:07:49.018 [2024-11-20 07:03:53.361792] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:49.352 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:49.352 07:03:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:49.352 [2024-11-20 07:03:53.623121] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:49.352 [2024-11-20 07:03:53.623192] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789215 ] 00:07:49.352 [2024-11-20 07:03:53.701757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.353 [2024-11-20 07:03:53.741154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.622 INFO: Running with entropic power schedule (0xFF, 100). 00:07:49.622 INFO: Seed: 2230323888 00:07:49.622 INFO: Loaded 1 modules (385410 inline 8-bit counters): 385410 [0x2c0db4c, 0x2c6bcce), 00:07:49.622 INFO: Loaded 1 PC tables (385410 PCs): 385410 [0x2c6bcd0,0x324d4f0), 00:07:49.622 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:49.622 INFO: A corpus is not provided, starting from an empty corpus 00:07:49.622 #2 INITED exec/s: 0 rss: 66Mb 00:07:49.622 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:49.622 This may also happen if the target rejected all inputs we tried so far 00:07:49.622 [2024-11-20 07:03:53.979173] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:49.622 [2024-11-20 07:03:54.038636] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:49.622 [2024-11-20 07:03:54.038672] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:49.881 NEW_FUNC[1/676]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:49.881 NEW_FUNC[2/676]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:49.881 #24 NEW cov: 11158 ft: 10781 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:50.140 [2024-11-20 07:03:54.502894] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.140 [2024-11-20 07:03:54.502941] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:50.140 #34 NEW cov: 11172 ft: 14347 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 5 EraseBytes-InsertByte-ShuffleBytes-InsertByte-CopyPart- 00:07:50.140 [2024-11-20 07:03:54.692026] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.140 [2024-11-20 07:03:54.692058] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:50.399 NEW_FUNC[1/1]: 0x1c01378 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:50.399 #35 NEW cov: 11192 ft: 15872 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:07:50.399 [2024-11-20 07:03:54.891992] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.399 [2024-11-20 07:03:54.892023] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:50.658 #36 NEW cov: 11192 ft: 16793 corp: 5/37b lim: 9 exec/s: 36 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:07:50.658 [2024-11-20 07:03:55.083002] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.658 [2024-11-20 07:03:55.083031] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 
return failure 00:07:50.658 #37 NEW cov: 11192 ft: 17202 corp: 6/46b lim: 9 exec/s: 37 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:50.916 [2024-11-20 07:03:55.275375] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.916 [2024-11-20 07:03:55.275404] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:50.916 #38 NEW cov: 11192 ft: 17363 corp: 7/55b lim: 9 exec/s: 38 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:50.916 [2024-11-20 07:03:55.466475] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:50.916 [2024-11-20 07:03:55.466506] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.175 #39 NEW cov: 11192 ft: 17529 corp: 8/64b lim: 9 exec/s: 39 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:07:51.175 [2024-11-20 07:03:55.659009] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:51.175 [2024-11-20 07:03:55.659040] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.434 #40 NEW cov: 11199 ft: 17728 corp: 9/73b lim: 9 exec/s: 40 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:07:51.434 [2024-11-20 07:03:55.850073] vfio_user.c:3102:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:51.434 [2024-11-20 07:03:55.850104] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:51.434 #43 NEW cov: 11199 ft: 18001 corp: 10/82b lim: 9 exec/s: 21 rss: 75Mb L: 9/9 MS: 3 CrossOver-ChangeBit-InsertRepeatedBytes- 00:07:51.434 #43 DONE cov: 11199 ft: 18001 corp: 10/82b lim: 9 exec/s: 21 rss: 75Mb 00:07:51.434 Done 43 runs in 2 second(s) 00:07:51.434 [2024-11-20 07:03:55.985814] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:51.693 07:03:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:51.693 07:03:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:51.693 07:03:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.693 07:03:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:51.693 00:07:51.693 real 0m19.318s 00:07:51.693 user 0m27.069s 00:07:51.693 sys 0m1.834s 00:07:51.693 07:03:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.693 07:03:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:51.693 ************************************ 00:07:51.693 END TEST vfio_llvm_fuzz 00:07:51.693 ************************************ 00:07:51.953 00:07:51.953 real 1m24.250s 00:07:51.953 user 2m7.892s 00:07:51.953 sys 0m9.953s 00:07:51.953 07:03:56 llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.953 07:03:56 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:51.953 ************************************ 00:07:51.953 END TEST llvm_fuzz 00:07:51.953 ************************************ 00:07:51.953 07:03:56 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:07:51.953 07:03:56 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:07:51.953 07:03:56 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:07:51.953 07:03:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.953 07:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:51.953 07:03:56 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:07:51.953 07:03:56 -- common/autotest_common.sh@1394 -- 
# local autotest_es=0 00:07:51.953 07:03:56 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:07:51.953 07:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.521 INFO: APP EXITING 00:07:58.521 INFO: killing all VMs 00:07:58.521 INFO: killing vhost app 00:07:58.521 INFO: EXIT DONE 00:08:00.426 Waiting for block devices as requested 00:08:00.426 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:00.685 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:00.685 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:00.685 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:00.685 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:00.945 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:00.945 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:00.945 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:01.205 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:01.205 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:01.205 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:01.464 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:01.464 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:01.464 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:01.723 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:01.723 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:01.723 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:05.918 Cleaning 00:08:05.918 Removing: /dev/shm/spdk_tgt_trace.pid3760972 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3758491 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3759645 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3760972 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3761426 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3762511 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3762537 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3763647 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3763657 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3764092 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3764416 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3764737 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3765075 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3765202 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3765445 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3765727 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3766041 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3766896 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3769906 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3770095 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3770392 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3770398 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3770967 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3771099 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3771678 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3771797 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3772091 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3772106 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3772394 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3772408 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3773039 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3773174 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3773358 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3773683 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3774239 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3774728 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3775259 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3775670 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3776083 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3776618 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3777135 
00:08:05.918 Removing: /var/run/dpdk/spdk_pid3777458 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3777981 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3778510 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3778968 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3779456 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3780015 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3780408 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3781232 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3781796 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3782202 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3782620 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3783149 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3783456 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3783969 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3784505 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3784799 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3785329 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3785812 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3786398 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3786777 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3787316 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3787850 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3788267 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3788681 00:08:05.918 Removing: /var/run/dpdk/spdk_pid3789215 00:08:05.918 Clean 00:08:05.918 07:04:09 -- common/autotest_common.sh@1451 -- # return 0 00:08:05.918 07:04:09 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:08:05.918 07:04:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.919 07:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:05.919 07:04:10 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:08:05.919 07:04:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.919 07:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:05.919 07:04:10 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:05.919 07:04:10 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:05.919 07:04:10 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:05.919 07:04:10 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:08:05.919 07:04:10 -- spdk/autotest.sh@394 -- # hostname 00:08:05.919 07:04:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:08:05.919 geninfo: WARNING: invalid characters removed from testname! 
00:08:09.207 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:08:12.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:08:14.406 07:04:18 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:22.527 07:04:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:27.808 07:04:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:33.139 07:04:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:38.411 07:04:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:43.682 07:04:47 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:49.061 07:04:52 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:08:49.061 07:04:52 -- spdk/autorun.sh@1 -- $ timing_finish 00:08:49.061 07:04:52 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:08:49.061 07:04:52 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:49.061 07:04:52 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:08:49.061 07:04:52 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:49.061 + [[ -n 3648610 ]] 00:08:49.061 + sudo kill 3648610 00:08:49.070 [Pipeline] } 00:08:49.086 [Pipeline] // stage 00:08:49.092 [Pipeline] } 00:08:49.106 [Pipeline] // timeout 00:08:49.111 [Pipeline] } 00:08:49.127 [Pipeline] // catchError 00:08:49.132 [Pipeline] } 00:08:49.147 [Pipeline] // wrap 00:08:49.152 [Pipeline] } 00:08:49.166 [Pipeline] // catchError 00:08:49.174 [Pipeline] stage 00:08:49.177 [Pipeline] { (Epilogue) 00:08:49.189 [Pipeline] catchError 00:08:49.191 [Pipeline] { 00:08:49.203 [Pipeline] echo 00:08:49.205 Cleanup processes 00:08:49.210 [Pipeline] sh 00:08:49.498 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:49.498 3797573 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:49.511 [Pipeline] sh 00:08:49.795 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:49.795 ++ grep -v 'sudo pgrep' 00:08:49.795 ++ awk '{print $1}' 00:08:49.795 + sudo kill -9 00:08:49.795 + true 00:08:49.807 [Pipeline] sh 00:08:50.093 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:50.093 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:50.093 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:51.472 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:01.474 [Pipeline] sh 00:09:01.761 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:09:01.761 Artifacts sizes are good 00:09:01.776 [Pipeline] archiveArtifacts 00:09:01.783 Archiving artifacts 00:09:01.942 [Pipeline] sh 00:09:02.228 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:09:02.243 [Pipeline] cleanWs 00:09:02.254 [WS-CLEANUP] Deleting project workspace... 00:09:02.254 [WS-CLEANUP] Deferred wipeout is used... 00:09:02.261 [WS-CLEANUP] done 00:09:02.263 [Pipeline] } 00:09:02.281 [Pipeline] // catchError 00:09:02.294 [Pipeline] sh 00:09:02.631 + logger -p user.info -t JENKINS-CI 00:09:02.640 [Pipeline] } 00:09:02.654 [Pipeline] // stage 00:09:02.660 [Pipeline] } 00:09:02.676 [Pipeline] // node 00:09:02.681 [Pipeline] End of Pipeline 00:09:02.719 Finished: SUCCESS